IOE Blog

Expert opinion from IOE, UCL's Faculty of Education and Society

The sweet smell of success: how can we help educators develop a ‘nose’ for evidence they can use in the classroom?

By Blog Editor, IOE Digital, on 8 February 2018

Mutlu Cukurova and Rose Luckin
A good nose for what constitutes solid evidence: it’s something a scientist is lost without. This finely tuned ‘nose’ is not innate; it is the result of years of practice and learning. That habit of constantly questioning and seeking evidence for decisions and beliefs is one we academics apply as much to our teaching as to our research. However, recent headlines cast doubt on the belief that other practitioners are able to make good use of research. An article on the TES website argues that “Teacher involvement in research has no impact on pupils’ outcomes”. Can this really be true? If so, what can we do to ensure that the billions of pounds spent on educational research are made accessible to, and used by, our educators?
The realisation that this evidence-informed ‘nose’ is not necessarily shared by many of those involved in education, and in particular by those involved in the design and use of technology for education, has also become starkly apparent to us through our development of a training course to help entrepreneurs and educators understand research evidence. This enterprise is part of the EDUCATE project at the UCL Knowledge Lab.
One of our aims is (more…)

How can research truly inform practice? It takes a lot more than just providing information

By Blog Editor, IOE Digital, on 14 December 2017

Jonathan Sharples
The Education Endowment Foundation’s latest evaluation report, the ‘Literacy Octopus’, provides plenty of food for thought for anyone interested in improving the way research evidence informs practice, not just in education, but across sectors.
This pair of large, multi-armed trials evaluated different ways of engaging schools with a range of evidence-based resources and events. The common focus was on supporting literacy teaching and learning in primary schools.
The findings make it clear that our notion of ‘research use’ needs to extend beyond (more…)

Evidence-based practice: why number-crunching tells only part of the story

By Blog Editor, IOE Digital, on 14 March 2013

Rebecca Allen
As a quantitative researcher in education I am delighted that Ben Goldacre – whose report Building Evidence into Education was published today – has lent his very public voice to the call for greater use of randomised controlled trials (RCTs) to inform educational policy-making and teaching practice.
I admit that I am a direct beneficiary of this groundswell of support. I am part of an international team running a large RCT to study motivation and engagement in 16-year-old students, funded by the Education Endowment Foundation. And we are at the design stage for a new RCT testing a programme to improve secondary school departmental practice.
The research design in each of these studies will give us a high degree of confidence in the policy recommendations we are able to make.
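To make that logic concrete, here is a minimal sketch (in Python, with entirely made-up numbers, not data from our studies) of what an RCT analysis boils down to: random assignment lets us read the difference in group means as an unbiased estimate of the treatment effect, with a confidence interval attached.

```python
# Toy two-arm trial: under randomisation, the simple difference in mean
# outcomes estimates the causal effect. All figures are illustrative.
import numpy as np

rng = np.random.default_rng(42)

n = 200                          # hypothetical pupils per arm
control = rng.normal(50, 10, n)  # outcome scores, control arm
treated = rng.normal(53, 10, n)  # treated arm (true effect = 3 points)

effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
lo, hi = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated effect: {effect:.2f} points, 95% CI: ({lo:.2f}, {hi:.2f})")
```

In a real trial the analysis would of course also adjust for the clustering of pupils within schools and for baseline covariates; the point here is only that randomisation is what licenses the causal reading.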
Government funding for RCTs is very welcome, but with all this support, why is there a need for Goldacre to say anything at all about educational research? One hope is that teachers hear and respond to his call for a culture shift, recognise that “we don’t know what’s best here” and embrace the idea of taking part in this research (and indeed suggest teaching programmes themselves).

It is very time-consuming and expensive to get schools to take part in RCTs, because most say no. Drop-out of schools during trials can be high, especially where a school has been randomised into an intervention it would rather not have, and it is difficult to get the data we need to measure the impact of the intervention on time.
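A back-of-the-envelope power calculation shows why recruitment and drop-out matter so much. The sketch below (Python; the 0.2 effect size and the 20% attrition rate are illustrative assumptions, not figures from our trials) uses the standard two-sample formula for the number of pupils needed per arm.

```python
# Pupils per arm needed to detect a standardised effect d at significance
# level alpha with the given power, then the cost of 20% attrition.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.8):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

planned = n_per_arm(0.2)   # small effects are typical in education trials
print(f"Pupils needed per arm: {planned:.0f}")   # ~392

# If drop-out removes 20% of the sample, the smallest effect the trial
# can reliably detect grows accordingly.
achieved = 0.8 * planned
detectable = (norm.ppf(0.975) + norm.ppf(0.8)) * (2 / achieved) ** 0.5
print(f"Detectable effect after attrition: {detectable:.2f}")  # ~0.22
```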
However, RCTs cannot sit in a research vacuum.
Ben Goldacre does recognise that different methods are useful for answering different questions, so a conversation needs to start about where the balance of funding across the various types of educational research best lies.
It is important that RCTs sit alongside a large and active body of qualitative and quantitative educational research. One reason is that those designing RCTs have to design a “treatment” – the policy or programme that is being tested to see if it works. This design has to come from somewhere, since without infinite time and money we cannot simply draw up a list of all possible interventions and test them one by one. To produce our best guess of what works, we may use surveys, interviews and observational visits that took place as part of a qualitative evaluation of a similar policy in the past. We also use descriptions collected by ethnographers (researchers who are “people watchers”). And of course we draw on existing quantitative data, such as exam results.
All of this qualitative and quantitative research is expensive to carry out, but without it we would have a poorly designed treatment with little likelihood of any impact on teacher practice. Without the experience of other research, we might also deliver the programme we are testing poorly, for reasons we failed to anticipate.
The social science model of research is not ‘what works?’ but rather ‘what works for whom and under what conditions?’
Education and medicine do indeed have some similarities, but the social context in which a child learns shapes outcomes far more than it does the response of a body to a new drug. RCTs may tell us something about what works for the schools involved in the experiment, but less about what might work in other social contexts with different types of teachers and children. Researchers call this the problem of external validity. Our current RCT will tell us something about appropriate motivation and engagement interventions for 16-year-olds in relatively deprived schools, but little that is useful for understanding 10-year-old children, or indeed 16-year-olds in grammar schools or in Italian schools.
The challenge of external validity should not be underestimated in educational settings. RCTs cannot give us THE answer; they give us AN answer. And its validity declines as we try to implement the policy in different settings and over different time frames. This actually poses something of a challenge to the current model of recruiting schools to RCTs, where many have used “convenience” samples, such as a group of schools in an academy chain who are committed to carrying out educational research. This may provide valuable information to the chain about best practice for its own schools, but cannot tell us how the same intervention would work across the whole country.
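A small numerical illustration (with hypothetical contexts and effect sizes, invented purely for the example) of why a convenience sample answers a narrower question than a nationally representative one:

```python
# If the true effect varies by school context, a trial run only in one
# context estimates that context's effect, not the national average.
import numpy as np

rng = np.random.default_rng(0)

# Invented true effects (score points) and national shares by context.
true_effect = {"deprived urban": 4.0, "affluent suburban": 1.0, "rural": 2.0}
share = {"deprived urban": 0.3, "affluent suburban": 0.4, "rural": 0.3}

national_avg = sum(true_effect[c] * share[c] for c in true_effect)  # 2.2

# A trial recruited entirely from committed urban academy schools is
# internally valid but recovers (roughly) only the urban effect.
trial_estimate = rng.normal(true_effect["deprived urban"], 0.5, 30).mean()

print(f"National average effect: {national_avg:.1f}")
print(f"Convenience-sample estimate: {trial_estimate:.1f}")
```

The estimate from the convenience sample is a perfectly good answer for those schools; it is simply an answer to a different question from ‘what works nationally?’.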
Social contexts change faster than evolution changes our bodies. Whilst I would guess that taking a paracetamol will still relieve a headache in 50 years’ time, I suspect that the best intervention to improve pupil motivation and engagement will look very different to those we are testing in an RCT today. This means that our knowledge base of “what works” in education will always decay and we will have to constantly find new research money to watch how policies evolve as contexts change and to re-test old programmes in new social settings.