IOE Blog

Expert opinion from IOE, UCL's Faculty of Education and Society

Does traditional grammar instruction improve children’s writing ability?

By Blog Editor, IOE Digital, on 5 July 2016


Alice Sullivan and Dominic Wyse. 
Children in England have recently taken their statutory tests at age 10-11 (commonly known as Key Stage 2 SATs). The results, published today, show that the pass rate has plummeted compared to last year. This is because the nature of the tests changed dramatically in 2016. We focus here on why the new English tests have been so difficult for children to pass – and why most parents would struggle to pass the tests too.
(more…)

Why education policy debates need a sociological voice

By Blog Editor, IOE Digital, on 17 March 2016

Geoff Whitty
Attending a gathering of philosophers and sociologists of education this week brought home to me how much closer those two groups are now in their analyses of education compared to when I first worked as a sociologist of education in the early 1970s. That encounter also led me to recall that, at the same time, there were major battles within the sociology of education itself, between the so-called ‘old’ sociology of education of A.H. Halsey and his colleagues at Oxford and the ‘new’ sociology of education, associated with Michael F.D. Young and others at the Institute of Education in London. I discussed these disputes over the sources and significance of change in education and society in my first authored book, Sociology and School Knowledge (Methuen, 1985).
My newest book, Research and Policy in Education, published last month by UCL IOE Press, reflects my attempt to make sense of the relationship between education research and education policy in the years since I stood down from the Karl Mannheim Chair in the Sociology of Education at the Institute of Education in September 2000 to become Director. While working on the new book, I found myself increasingly drawn back to my roots as a sociologist of education. Even though such work is not necessarily undertaken first and foremost with a view to policy impact, I found sociological work really helpful in understanding why some of the policies I was discussing hadn’t had the impact that politicians claimed they would have. This applied not only to contentious initiatives (more…)

REF results: what’s the spin and what are the real stories?

By Blog Editor, IOE Digital, on 18 December 2014

Chris Husbands
In 1993, Shane Warne, the great Australian spin bowler, bowled to England's Mike Gatting. The ball, heading towards leg stump, was played at by Gatting and then, at the last moment, in a twist of pure genius from Warne, it turned sharply and took out his off stump. You can see the 'ball of the century' in any number of video clips: the most remarkable spin bowling anyone can recall.
But as spin, it pales in comparison with the efforts of university communications departments following the publication – at one minute past midnight on December 18th – of the results of the 2014 REF. (more…)

Election silly season: is research an ornament, a luxury good or ammunition in a war?

By Blog Editor, IOE Digital, on 12 September 2014

Chris Brown
As with many things in our Western consumer culture, research use may be conceived as an act of consumption. Correspondingly, research is often treated by its users much as they would treat a consumer object, like a coffee maker or television. In the case of educational policy making, the research 'consumer object' seems to fall under one of two perspectives: it is either viewed as a luxury item, with high use value and prestige, or its use is limited and it is employed primarily, much as we employ sparkly trinkets, to distract attention. (more…)

Boyhood: the first longitudinal movie?

By Blog Editor, IOE Digital, on 22 August 2014

Evaluating social interventions: What works? In whose terms? And how do we know it works?

By Blog Editor, IOE Digital, on 13 August 2014

Sandy Oliver
What do farmers attending field schools in Africa have in common with women attending maternity clinics in England? Both groups have played a role in rigorous academic research. They have influenced studies evaluating programmes that were designed to improve their lives.
In the mid-1990s Farmer Field Schools were spreading across Africa. These schools use active, hands-on learning and collaboration to improve agricultural productivity. Their strong participatory ethos makes the field schools very relevant to those involved.
Logic tells us that these schools should make a big difference to the farmers’ yields and to their lives. However, a strong theoretical base, enthusiasm and participatory principles don’t guarantee success. A research study seeking to collect, analyse and synthesise a wide range of evaluations of field schools found their success is largely limited to pilot projects. Furthermore, success is less likely with poorer farmers and women farmers.
It would be helpful to know how Farmer Field Schools compared with other approaches to improving agriculture – but the authors found a dearth of such rigorous impact evaluations. They see a need for studies that track potential changes through the whole course of the project — from the preparatory work of training facilitators and identifying potential participating farmers through to the ideas they discuss, try out and share with their neighbours.
They particularly recommend rigorous evaluations assessing impact in broad terms – not just agricultural productivity, but also empowerment, health and the environment.
Carrying out such evaluations is highly skilled work. In fact, knowing how to commission research that will yield really practical information – that will answer the questions and concerns of the people whose lives it is seeking to benefit – is not straightforward either.
Such issues will be part of a short course in Evaluation for Development Programmes offered by the London International Development Centre (LIDC) later this year, on which I will be teaching.
The course will offer opportunities for participants and tutors alike to learn from each other, and is designed for:

  • development professionals who commission and use evaluation studies
  • academics who plan to work in multi-disciplinary teams on future evaluation studies of development programmes, and
  • postgraduate students who wish to gain a better understanding of the terminology and fundamentals of evaluation methods.

Our vision for the new course is that it will help to achieve effective and appropriate support for better health and wellbeing through training professionals who design social interventions. It will help them to understand, commission, use and interpret evaluation studies, and work with potential beneficiaries such as farmers in Africa or pregnant women in the UK.
Research on anti-smoking support for pregnant women in the UK offers a contrasting example of why rigorous academic evaluation of the impact of social interventions is not enough.
In many high income countries in the 1990s, pregnant women were commonly advised to avoid or give up smoking for the health of their baby. The success of this strategy was assessed by rigorous randomised controlled trials, which reported reduced proportions of women smoking and fewer babies born too soon, too small or sick.
However, these trials took little notice of other criteria considered important by health promotion specialists and pregnant women themselves. What, they wondered, were the effects of encouraging women to give up smoking, if smoking helped them cope with the daily pressures of disadvantaged lives? Might asking midwives to counsel women against smoking interfere with supportive midwife-mother relationships?
Concerned practitioners and women who smoked (some who gave up, and some who did not) discussed their theories about the impact of smoking cessation programmes in pregnancy. At that time these theories had not been tested. Drawing attention to this gap in our collective knowledge encouraged a new generation of randomised controlled trials that took into account the social and emotional consequences, not just biomedical measures, of smoking cessation programmes. Subsequent studies showed that concerns about potential harm, such as stressed mothers and damaged family relationships, were largely unfounded. Now national and international guidelines are based on rigorous evaluations designed with women, not just for them.
These two very different examples raise common questions about theories of change, research methodology, criteria for success, equity and ethics. They also feature not just individual studies, but whole literatures of similar studies, which strengthen the evidence underpinning current recommendations. These key characteristics of evaluating complex social interventions require research approaches that cut across traditional academic disciplines and draw heavily on the policy, practice and personal knowledge of those directly involved.
 

TAs: only a research-policy-practice trialogue will lead to evidence-based policy-making

By Blog Editor, IOE Digital, on 17 June 2014

Rob Webster
The economists are at it again!
This time last year, the Reform think tank outlined cost-saving measures that, it claimed, could be made without damaging pupils’ education. Chief among them was cutting the number of teaching assistants (TAs) in schools.
The rationale was based on findings from our Deployment and Impact of Support Staff (DISS) project, which found that children who received the most support from TAs consistently made less progress than similar pupils who received less TA support – even after controlling for factors like prior attainment and level of special educational need (SEN).
Thankfully, the recommendation to axe TAs got short shrift from the DfE. Not so fortunate the elementary school system in North Carolina, USA.
Last month, the state Senate proposed a $21.2 billion budget plan, $470 million of which would pay for an average 11% pay rise for teachers. Half the funds for this, however, would come from cutting the equivalent of 7,400 TA jobs – all but eliminating TAs in second and third grades (ages 7-9).
Perhaps unsurprisingly, this decision – expected to be ratified by lawmakers by 30 June – has sparked petitions and protests. A local educationalist likened the situation to paying for a liver transplant by selling a kidney!
The context for the controversy is on-going attempts by politicians to improve educational standards in North Carolina. Echoing the conclusions of the Reform report, State Senate leader Phil Berger said achieving this is about using research evidence to prioritise resources: ‘to target our dollars to those things that are shown to improve student growth’. For Berger, this means making teaching financially more appealing in a state where attracting and retaining high quality teachers has been a perennial problem.
Relying on inconclusive research evidence about the effect of teacher pay on educational standards to inform policy is worrying enough. To hear, with an eerie sense of déjà vu, that a partial reading of the DISS project findings has also been used to justify the proposals should raise questions about politicians' use of empirical research and their proclaimed fondness for evidence-based policy.
There is no avoiding the finding that high amounts of TA support have unintended consequences for pupils, especially those with SEN, but our research is very clear about the reasons. It is decisions made by school leaders and teachers about TAs – not by TAs – in terms of their deployment and preparation, that best explain the DISS results. This vital message seems to have bypassed state legislators.
As my colleagues and I never tire of pointing out, the DISS results do not suggest that getting rid of TAs will improve outcomes if all other factors remain equal; if anything, it will create more problems.
Teachers in North Carolina may be about to see their salaries increase and – as Berger and others in the Senate acknowledge – their jobs transform. But with no additional teachers coming into the system, plans to reduce class sizes dropped, and no proposals to ensure teachers are not overworked or to train them to work with children with special needs, they will earn every single dollar.
For all the talk of basing policy decisions on research evidence, the situation in North Carolina is another example of the kind of poorly planned and expensive experiments with pupils’ learning and adults’ careers and well-being that are becoming worryingly commonplace in public education systems the world over.
These revelations from across the Atlantic should be troubling for the research community too. Just recently Louise Stoll and Chris Brown wrote on this blog about collaborative models of knowledge exchange in education: efforts to translate and transfer research findings into practical tools and strategies for practitioners.
A team of us at the IOE are currently developing our own model of knowledge mobilisation based on the work we’ve undertaken with schools on our Maximising the Impact of TAs programme.
Our experience has been that these two-way efforts between schools and universities can be extremely fruitful and mutually beneficial to the processes of teaching and research. Yet the essential need for policymakers to be involved in the process of converting knowledge into policy and practice is writ large over the events in North Carolina.
Selective readings and misrepresentations by detached decision-makers of findings from hard-won (and often taxpayer-funded) empirical research, research that depends on the co-operation and contributions of busy practitioners working in high-pressure environments, pose a threat to the trust between researchers and educators that underpins collaborative research and development – not to mention the relationship that each group has with the public.
Only recently has the UK Government clarified its somewhat 'hands-off' position on TAs. Whilst there is obvious appeal in giving school leaders autonomy to make their own staffing decisions, given the vast sums of public money involved in employing TAs and the high-stakes nature of education generally, it seems a rather relaxed approach.
Our emerging model of knowledge mobilisation recognises the essential need for policymakers' participation in turning the research-practice dialogue into a research-policy-practice trialogue. Their willingness to engage would be a clear demonstration of their much-vaunted faith in evidence-informed policy and practice.
 
Rob Webster is a research associate at the Institute of Education and freelance consultant/trainer. He is grateful to Andy Curliss of The News & Observer, North Carolina, for bringing this story to his attention.
 

'Knowledge exchange' between researchers and practitioners must be a two-way street

By Blog Editor, IOE Digital, on 27 May 2014

 Louise Stoll and Chris Brown
Both of us are fascinated by how research finds its way into policy and practice. Most researchers hope their findings will be used, but engaging people isn’t always easy or straightforward.
It’s good to see an increase in initiatives focusing on this challenge – for example, the Education Endowment Foundation’s recent call for bids in relation to encouraging the uptake of research in schools. Many terms are used to describe the process – dissemination, knowledge transfer, knowledge mobilisation and research utilisation, to name a few. Whatever their intention, the message they can convey to practitioners is that researchers have the knowledge that practitioners need to receive.
Our attention has been caught, though, by the term ‘knowledge exchange’. This suggests a two-way flow in a more equal relationship, which makes a lot of sense. Everyone has their own knowledge and experience to share and research can enrich this, as well as pushing researchers to think again about what their findings mean in different contexts.
An R&D project, funded through the Economic and Social Research Council’s (ESRC’s) Knowledge Exchange Opportunities Scheme has been giving us the opportunity to explore researcher/practitioner relationships in more depth. Over the last six months, along with our colleagues Karen Spence-Thomas and Carol Taylor, we have been working with Challenge Partners, a group of more than 230 state-funded schools across England that work collaboratively to enhance the quality of their teaching and leadership, with an ultimate aim of improving outcomes for children and young people. Challenge Partners (CP) aim to provide a vehicle for their schools to learn from the best of their peers.
Our project has been adding research into this mix. It’s exploring and learning about establishing an expanding and sustainable network of middle leaders (such as department heads, subject leaders and key stage leaders) across CP schools that can: exchange evidence-informed knowledge about effective middle leadership that changes teacher practice; track its impact; and find powerful ways to exchange the outcomes of their applied project work more widely within and beyond the partnership to benefit a broader range of educators. For a summary of our project questions and the project, see: ESRC Middle Leaders Project.
In workshops, we share both research findings and effective practice. These are then blended together to create new knowledge that middle leaders use to design and refine processes and tools to help them lead more effectively and track their impact. In between sessions, the middle leaders test new ideas and trial tools with colleagues and teams both in their own and in other schools. They do this both via face-to-face engagement and through social networking. With us they will also be developing processes to embed the notion of sharing high quality research-informed practice between schools in their own networks and for practitioners in other networks. We have a parallel evaluation strand in which our researchers, together with researchers from two CP schools and the CP office, are collecting baseline and follow-up information and following project activities.
Partnership is absolutely critical. We co-designed the project with CP, are now involving middle leaders in planning and facilitating sessions, and are co-evaluating the project. Through this, we are trying to model knowledge exchange and collaboration by drawing on the expertise and practices of researchers, knowledge exchange professionals (a term used by the ESRC to describe people who help translate research findings) and practitioners. We hope this will increase the project’s potential to benefit the middle leaders and their colleagues and pupils.
Ours is a two-way relationship: we are learning from our partners as well as them from us, and we have combined our research knowledge, Challenge Partners’ prior experience and published knowledge, and the middle leaders’ knowledge. At times this challenges our thinking – we are tracking this as well – but we know that powerful professional learning does just that.
We will be back with an update in a few months.
 

Understanding impact: what does it actually mean?

By Blog Editor, IOE Digital, on 9 May 2014

Chris Husbands 
Research changes lives. Somewhere along the way, every research project involves a question which is about making a difference to the way people think, behave or interact. In applied social science – the field in which the IOE excels – research sets out to understand, shape and change the world around us.
The idea of research ‘impact’ has become important in research management and funding. Universities are now held to account for the impact of their research, not least through the Research Excellence Framework. But ideas about ‘impact’ and how to secure it vary.
In popular accounts of science, research is sometimes described as ‘earth shattering’, as if it creates something like a meteorite crater reshaping the landscape for ever. An example might be Frederick Sanger’s development of rapid DNA sequencing in the 1970s, which has transformed practices across any number of fields.
But there are other images to describe impact. Not all research has what the LSE research team looking at impact call an ‘auditable’ consequence. They comment that research applied in practice is “always routinized and simplified in use” so that over time, the impact fades like ripples in a pond.
The University of the Arts talks of its research as having creative outcomes that enhance cultural life and provides examples of artworks which create possibilities and shift perceptions: ideas which float into the air like dandelion seeds.
The impact of some research is apparent quickly – though almost never as rapidly as the tabloid newspapers which almost weekly trumpet miracle breakthroughs would have us believe – whereas in other cases it can take decades before the value of research becomes apparent.
Not only does the IOE itself undertake research which seeks to have an impact, it’s also interested in understanding what impact looks like, what it means and how it happens. At a recent conference we explored the linked questions of research impact and public engagement: the relationships between research, policy, practice and improvement that some of my colleagues try to understand.
The ESRC defines research impact as “the demonstrable contribution that excellent research makes to society and the economy”. This suggests three components: the demonstrable nature of the contribution, the quality of the research, and the focus on both society and the economy. Successful impact means holding all three in creative relationship: without any one of them, the other two are diminished. Research which is not excellent will not make a demonstrable contribution; research which sits unpublished and unread will not, whatever its methodological sophistication, make a demonstrable contribution; and so on.
Understandings of impact – or of knowledge mobilisation and its dynamics – have been transformed over the last fifteen years, as the barriers to, and strategies for, making creative use of research to impact more directly on people’s lives have become clearer, and as ways of engaging the public in the dynamics of research have developed. No research – however excellent – ever simply has an ‘impact’.
Richard Doll discovered that smoking caused lung cancer in the 1950s, but it took several years and active public health campaigns to change behaviour. In education, the gap between, say, research on assessment for learning (AfL) and AfL practice suggests that – like the idea of the droplet making ripples on a pond – the impact of research can quickly dissipate unless something active is done.
Research always needs mediating – or, to put it differently, research impact needs a plan. Academics used to talk about ‘dissemination’, but thinking has moved far beyond such models – “I research, you listen” – to more creative and nuanced understanding of the ways knowledge moves – and does not move – around organisations and society. We have learnt that while these relationships are complex, they can be managed effectively.
In the early days of work on research impact, thinking focused on ‘what works’, on the assumption that research could tell us what techniques have maximum effectiveness, and that this could in some way be taken to scale by more or less sophisticated dissemination techniques. We have become – individually, institutionally, collectively – more sophisticated than that, and we have done so quickly. We know that ‘how it works’ and ‘why it works’ are just as important, and that the effort to link the worlds of research, policy and practice involves commitment and engagement from all parties. In Ontario, the Knowledge Network for Applied Education Research tries to draw key groups together to enhance knowledge mobilisation. Bringing about change in practices is never easy, as anyone who has ever tried to lose weight, get fitter or learn a new language knows.
There’s a nice social enterprise quotation: “success does not matter. Impact does”. The IOE is a socially engaged university. We care about the quality of the research we undertake, and we make sure that it is of the highest quality. But we care equally about the way our research shapes the society it is seeking to understand. We understand that research evidence will always be only one of the factors that influences society, and that other factors always intervene. But we also know that progress has been made in the past in this field and more can be made in future with persistent effort.
For us, ‘impact’ is not an artefact of the 2014 REF, nor an obligatory hoop through which to jump. There is a wonderful line from the 2008 strategic plan for Carnegie Mellon University – and very, very few university strategic plans contain quotable lines. But in 2008 Carnegie Mellon got it right: “we measure excellence by the impact we have on making the world better”.
 

How researchers and the autism community together can bring about real change

By Blog Editor, IOE Digital, on 23 April 2014

Liz Pellicano
In the summer of 2010, as England were being knocked out of the World Cup, something altogether more hopeful was happening at the Institute of Education. Jonathan Wolff (UCL) and I had invited a collection of scientists, social researchers, parents and autistic people to join a discussion on the way in which autism is understood and investigated in academia today.
Neither of us could have anticipated the response we received. The enthusiasm for the chance to debate and discuss complicated and emotionally charged issues like the ‘cure’ and ‘prevention’ of autism vs. notions of autistic differences and what some call ‘neurodiversity’ was astonishing – despite, and perhaps even because of, widely opposing views.
This event led colleagues and me at the Centre for Research in Autism and Education to consider more fully the need to engage the autism community – autistic people, their family members, those who support them and researchers – in research and its many implications. Over the past few years, we have made efforts to improve awareness of autism research, through our newsletters and social media networks (Twitter, Facebook), and to get people involved in discussions about controversial and complex issues about autism, through our free and public events at the IOE.
The high point of this work so far has been a project, A Future Made Together, funded by Research Autism, which conducted the most comprehensive review of autism research in the UK ever undertaken. Tony Charman, Adam Dinsmore and I set out to discover how much was spent on UK autism research and which areas were being addressed. We consulted with over 1,700 autistic people, their families, practitioners and researchers to understand what they thought of current autism research in the UK and how funds for autism research should be prioritised.
Our report acknowledged the many great strengths of autism research in the UK, such as our leading work in the area of cognitive psychology. But it also identified considerable challenges in the years to come. One of these was highlighted in a discussion with parents of children with autism. While they were impressed by the amount of work that goes into autism research, they were not convinced that research had made a real difference to their lives.
One woman said:
“I fill in all these questionnaires and do everything I can to help … but when it comes down to it, it’s not real life. It’s always missing the next step. It’s great you’ve done this research, you’ve listened to my views … but now do something with it.”
Too many people feel that there is a huge gap between knowledge and practice. Research doesn’t seem to help their child catch the train by themselves or keep themselves safe. And it doesn’t say how to get autistic adults into jobs and keep them there.
The people we spoke to said that they don’t want to read about research in academic papers. They want to hear about research in accessible ways. And they want to see real changes and real things happening on the ground for them, for their child, or for the person they work with.
It turns out that there is a good deal of truth in the criticisms. British academics simply haven’t been taking much notice of real-life issues. Our analysis showed that the majority of UK research focuses heavily on ‘basic science’ – neural and cognitive systems, genetics and other risk factors – rather than on the immediate circumstances in which autistic people find themselves: services, treatments and interventions, and education.
The autism community recognised the need for basic research to better understand the underlying causes of autism – but they wanted a more balanced profile, one that weights research with a direct impact on the daily lives of autistic people more equally against core areas of basic science.
Almost all the researchers I know want to make a difference to people’s lives. But how do we do that?
My view is that we need to take research in radical new directions. Without doubt, we need to continue to develop our world-leading skills in autism science. But significant investment is also needed in areas of autism research currently under-resourced in the UK. And in order to work out which areas need the greatest investment, we ought to be listening to people about what they want from research.
Autism researchers do not do this enough. According to our findings, autistic people, their family members, and even practitioners are rarely involved in the decision-making processes that shape research and its application. Research priorities are thus ordinarily set almost exclusively by funders and academics in specialist fields. This pattern generates concrete problems for those responsible for commissioning local autism services, for people working in such services, and for autistic individuals and their families when they attempt to make evidence-based decisions on education, health and social care.
But this is also problematic because of the feeling of exclusion that it engenders. The people that we spoke to often felt disappointed and frustrated at being ‘mined’ for information and having little or no opportunity to learn about the resulting discoveries and what they might mean for them. They also felt as if their expertise and knowledge – what it is like to be autistic, to care for someone who is autistic, or to work with someone who is autistic – was disregarded by researchers. This lack of reciprocity resulted in feelings of distrust and disempowerment. One autistic adult said, “Whatever we say, is that really going to influence anyone?”
We need to turn this around. As researchers, we need to connect more with the people we ‘study’. We need to value and respect the expertise of the autism community and, at the same time, work with them to increase their ‘research literacy’.
At CRAE, we aim to develop innovative approaches to ensure that autistic people, family members and practitioners are able to participate more fully in the decisions that affect their lives, in the research lab, in schools, at work and in local communities. But developing these research-community partnerships is not easy. It takes time, effort and often funding. Institutions, grant-giving bodies and government agencies are promoting public engagement in research but much more needs to be done to develop supportive infrastructure, including providing the necessary training in participatory methods.
The rewards of working together are manifold. In fact, it may be the only way to ensure that the research that we do really counts.
 
CRAE is the winner of this year’s IOE Director’s Prize for public engagement.