
IOE Blog


Expert opinion from IOE, UCL's Faculty of Education and Society


AERA reminds us that education research is part of a genuinely global discourse

By Blog Editor, IOE Digital, on 8 April 2014

Chris Husbands
The annual conference of the American Educational Research Association cannot really be described: it has to be experienced. Every year, it attracts almost 20,000 education researchers, not just from North America but from across the English-speaking world and, in the last decade, increasingly from East Asia. So any individual experience of the conference must still be partial.
For five days, AERA takes over the downtown of a large American city, so the sheer logistics of running the annual conference must be mind-boggling. The conference programme is the size of a telephone directory and about as readable: even the app, available for the last few years, takes some navigation. You have to know exactly what you are looking for to make use of the search function; casual browsing is difficult – although the AERA2014 app does contain abstracts for the thousands of papers.
In essence, AERA is not one conference but several. AERA is organised into 12 divisions, from Administration, Organization and Leadership (Division A) through to Educational Policy and Politics (Division L), taking in Measurement and Research Methodology (Division D) and Learning and Instruction (Division C) with much else besides. Each division runs several parallel sessions at any one time. Then there is the conference of the highlights: the large, set-piece lectures and panels led by genuine global stars such as Diane Ravitch (this year on the challenges of quality and equality), Andreas Schleicher (this year on why we should care about international comparisons), Charles Payne (in 2014 on the fiftieth anniversary of the Civil Rights Act) and Linda Darling-Hammond (on issues in the validity of high-stakes assessments): their sessions fill the ballrooms of large hotels, standing room only.
Then there is the conference of the post-doctoral researchers, for whom AERA is a grand hiring fair – a good 20-minute performance reporting on your doctorate to a room of perhaps nine people can be instrumental in landing a prestigious position. And of course there is the conference of the corridors: knots of people meeting up to compare experiences of research funding and research policy, to complain about their miserable lot, to plot and to scheme and to gossip, to broker deals and agreements – people who have not seen each other since San Francisco last year and won’t meet again until Chicago next year.
And the range is huge: to deploy some (all too frequently observed) stereotypes, sessions on structural equation modelling led by earnest young think tank econometricians in sharp blazers, sessions on the endless reverberations of race in American education full of lively, disputatious people of colour, sessions on urban school reform led by harassed school superintendents looking for better teacher or school evaluation strategies.
This year’s conference (April 3-7) was in Philadelphia – the conference is always in one of those vast American cities where a wrong turn at one block will take you into parts of town where you’ll come across urban Americans uninterested in the finer points of methodology – and its over-arching theme was “the power of education research for innovation in practice and policy”. Barbara Schneider (Michigan State University), this year’s president, chose to speak about the “college mismatch problem”: why American teenagers from poor backgrounds apply to universities of lower status than their grades could get them into; Ruby Takanishi from the New America Foundation and Rachel Gordon from the University of Illinois looked at what we are learning from universal preschool education.
There are major methodological innovations: the impact of learning analytics on the knowledge base for lifelong learning, what the evidence is saying about recent immigration and its consequences for education. But all this makes it sound too ordered. Opening the telephone directory programme randomly I find “an Australian perspective on inequality and education”, “blacks, hip-hop and the sociocultural milieu”, “dental school deans’ perception of dental education costs”, “does teacher and student race congruence help or hinder student engagement in ninth grade science”, “the common core standards and teacher quality reform”, and “comparing three estimation approaches for the Rasch Testlet model”: and on and on through literally thousands of sessions.
It’s almost impossible to discern trends, though economists seem to be growing in number and influence; ‘big data’ and its promises and pitfalls preoccupy more people; and even in America – that most inward-looking of melting pots – questions of international comparison and globalisation are more than ever in evidence. Being at AERA is a reminder of the similarities and differences between American and English experience in education.
There are some common themes: the relationships between quality and equity, between social structure, education experiences and performance, between the dynamics of research and the dynamics of policy. Others look similar but are really different: academies, for example, are not, in the last analysis, quite the same as charter schools. Others are genuinely different: the American experience of urban school reform is not the English experience; America’s experience in curriculum reform and teacher education has been quite different from England’s.
AERA is always simultaneously disorienting – you inevitably feel you are in the wrong place, that there is a more interesting and important session just around the corner – and energising – thousands of exceptionally able and engaged people enthused about education, and above all reminding us that education research is part of a genuinely global discourse.

Research for all: a journal for all

By Blog Editor, IOE Digital, on 5 April 2014

Sandy Oliver

There is nothing unusual about academics and amateurs sharing and discussing their interests in learning. Professional and amateur stargazers debate the night sky, volunteers dig alongside archaeologists, biographers need readers and museums thrive with interactive exhibits.
Applied research such as medicine, communications or agriculture, elicits opinions about the focus, ethics and governance of research from people interested in the potential benefits and harms of new technologies or ways of working. All this is public engagement with research – where non-researchers are either contributing to the research, or debating or making use of the findings. Citizen science, engaged scholarship, patient involvement, public participation, practitioner research and many other terms describe activities which have overlapping principles and methods.
Sharing lessons between the disciplines and across policy sectors is difficult because we do not have a common language or shared understanding of what public engagement comprises and how it operates.
In the UK, universities are supported by Research Councils UK and the National Coordinating Centre for Public Engagement to encourage a growing culture of public engagement with research – by developing “the myriad of ways in which the activity and benefits of higher education and research can be shared with the public”. This engagement is taken to be “a two-way process, involving interaction and listening, with the goal of generating mutual benefit”.
Staff at the Institute of Education have been part of this social movement. Universities across the UK and internationally have been opening up their ‘ivory towers’ and finding new ways to work with other people in organisations and networks where knowledge is valued for culture and for policy decisions.
For instance, university students who grew up in local authority care played a prominent role as workshop leaders, speaking movingly about their own challenging experiences, at a conference reporting a study of them and 150 of their peers. Ultimately, the By Degrees study (pdf) led to the introduction of a bursary for care leavers who go on to higher education and encouraged many UK universities and local authorities to improve the support they offer care leavers.
On a lighter note, pupils and teachers were involved in pilot testing software (pdf) that would allow young people to make 3D adventure and puzzle games that are as satisfying to play as the ones they buy. Developing a common language was important for cross-disciplinary and cross-generational understanding of game design, and a new quality, commercial product.
Elsewhere, public debate about nanotechnologies (engineering on a molecular scale) illustrated how public engagement can: reveal public concerns and wishes; suggest new lines of enquiry; open science to public scrutiny; provoke reflection on the wider, social implications of scientific developments; and help scientists and the public develop new skills and mutual appreciation.
Ironically, despite holding similar principles, academics who are applying them in various areas for different purposes are often working in isolation, unaware that other enthusiasts are down the corridor or in neighbouring universities. Now, discussions between the eight universities with RCUK public engagement ‘catalyst’ funding, and the NCCPE, have inspired plans for an international journal for academics and others interested in research.
This journal, to be launched by IOE Press, will focus on the role of academic research in society at large, and the role of society at large in academic research. It will publish empirical research and critical analyses of public engagement with research across all academic disciplines; opinion pieces from public perspectives and engagement intermediaries; and reviews of books and events. It is a forum for sharing the learning from research and practice that crosses boundaries between research and the wider world, across academic disciplines and policy sectors.
The journal will consider the questions academics ask about how to choose between different publics and different methods depending on the context of their research projects, and the consequent impact on the research and those involved. It will consider the questions asked by outsiders wishing to engage academics in research – how to read research, news or commentaries with a critical eye, navigate university structures, and inspire academics with new agendas. Lastly, it will consider the systems and cultures that support or block academics and the public learning from each other.
As is typical of this area – where the choice of language reveals assumptions, cliques and fashions – an appropriate name for such a journal remains elusive. The vision is to bring together the wisdom of academics, practitioners, Science Technology Engineering and Maths (STEM) Ambassadors and all manner of engaged publics. Their task will be to shape a ground-breaking journal – and find a name.

Brain science and education: seeking the right connections

By Blog Editor, IOE Digital, on 28 February 2014

 Kevan Collins
What can neuroscience tell us about education? This question elicits a wide range of responses from teachers, neuroscientists and educators – from the pessimistic “Nothing. What can a brain scan tell a teacher about what to do with a difficult Year 9 class?” to the very optimistic “Everything! If only I could see what my pupils were thinking I’d know what to do”.
At the Education Endowment Foundation (EEF), we believe the answer is likely to be much more complex and that the potential for this area of work is huge. That’s why we have joined with the Wellcome Trust in a £6m Neuroscience and Education funding initiative.
Neuroscience has already confirmed that certain existing educational practices have an impact on the brain. But it can also prompt questions about how best to implement them. “Spaced Learning” – the idea that it is better to split the time spent learning something into several short bursts rather than learning it in one large block – has long been used in education. However, recent neuroscience research suggests that the positive impact may be due to the additional time spent thinking about the material. This raises questions such as: How long should you leave between sessions? And what type of activity should you do in between?
Neuroscience can also provide us with fresh ideas for educational approaches. One such idea is that of uncertain reward and computer games. It seems that one of the reasons that computer games are so compelling is that they employ “uncertain reward”. Sometimes an action is rewarded and sometimes it isn’t, so whether you progress in the game is partly down to skill and partly down to luck.
This type of reward structure stimulates a much greater dopamine release in the brain than reward that is either guaranteed or entirely absent. Dopamine is important for motivation, but also for memory formation – a combination that would surely prove useful educationally. Learning games that employ the same reward structure as commercial games are being developed [link] and could provide a powerful learning tool.
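There is a simple statistical sense behind this idea: a reward delivered with probability p is most variable – most “uncertain” – when p is 0.5, and not uncertain at all when it is guaranteed or absent. The toy sketch below illustrates only that arithmetic about probabilities; it is not a model of dopamine or of the neuroscience findings themselves:

```python
# Variance of a Bernoulli(p) reward: p * (1 - p).
# This peaks at p = 0.5, the "maximally uncertain" reward schedule,
# and is zero when the reward is certain (p = 1) or absent (p = 0).
def reward_uncertainty(p):
    return p * (1 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}  uncertainty = {reward_uncertainty(p):.4f}")
```

Running this shows uncertainty rising from zero at p = 0, peaking at p = 0.5, and falling back to zero at p = 1 – the shape that makes a 50/50 reward the most compelling schedule in the statistical sense.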
Ensuring that neuroscience can usefully inform education requires the involvement of many groups: neuroscientists, psychologists, educational researchers and teachers, for example. It also requires the careful use of evidence throughout the process:

  • firstly looking at the evidence generated by neuroscience to select those ideas most likely to be useful to education;
  • then ensuring these ideas are applied within the classroom in a way that is both feasible and supportive of teaching and learning approaches identified as effective by educational research;
  • and lastly the neuroscience-informed approach or intervention needs to be rigorously evaluated.

It is this process that the Wellcome Trust and EEF are hoping to facilitate through the new funding initiative.
Many practitioners are excited by the idea that neuroscience could influence education; indeed a recent Wellcome Trust survey found that eight out of ten teachers would collaborate with neuroscientists doing research in education. This is very encouraging. However, it’s also important that ideas are not adopted before they have been rigorously tested, and that their neuroscientific basis is sound.
There is then potential for neuroscience to inform education, enhancing current practice and providing new ideas. Developing educational interventions truly informed by neuroscience would also stop unproven commercial products from filling the current gap in the market. The EEF and Wellcome Trust funding initiative intends to generate evidence about the impact of existing neuro-informed educational interventions, as well as funding some more developmental projects to develop and pilot new approaches. The aim is to generate a much larger body of evidence that answers our opening question and, we hope, identifies approaches that raise the educational attainment of young people.
More information on the funding round can be found here, including a literature review that discusses other ideas from neuroscience that could be applied within education. 
Kevan Collins is Chief Executive of the Education Endowment Foundation and a Visiting Fellow at the IOE

Time to re-think the unthinkable: how can we get our research messages discussed by politicians?

By Blog Editor, IOE Digital, on 23 September 2013

Chris Brown

The party conference season is a useful barometer for those who champion the more widespread use of evidence within policy making. Among the announcements and denouncements, we start to get an understanding of the gamut of policy positions being developed by the main political parties and, importantly, by those who advise them. These are, to use the ancient Greek idea, the nascent policy “agoras” (pdf), or gathering places for policy.
They matter because they illustrate that whoever wins the election will have already devised their manifesto for government. This positioning of perspectives will also frame the nature of the evidence policy-makers will or won’t engage with once in office. Clearly the scope of any policy agora (the breadth of the arguments it contains) depends on the extent to which ministers wish to let their civil servants investigate potential solutions for particular policy problems. But if the trend set by the current education secretary continues, then the positioning both of evidence and of those who offer advice worth listening to is something that will need to happen long before the electioneering for 2015 has even commenced.

The year ahead, as a result, represents the period when we can work with potential future governments to re-think the unthinkable: to champion new ideas at the expense of the current ones and to reposition the country’s journey over the course of the next electoral cycle. This of course takes time and effort, but it also requires an understanding of the appropriate strategies to employ.
Historically, academics, in addition to their day-to-day business of writing journal articles, have been encouraged to ensure that their research outputs are both digestible and applicable: that what they write can not only be easily understood, but is also immediately ‘policy ready’. Often such efforts result in frustration. This is because, while useful, these two qualities alone are unlikely to lead to a greater uptake of research by policy-makers: ideas may still sit outside the policy agora, or policy-makers may simply fail to see any need to act on what is presented. Importantly, then, what is also required is substantial groundwork to enhance the “social robustness” of any idea – to promote its importance and the need to act as a result.
Efforts to enhance social robustness can be directed via the general media, social media or through cultivating links with special advisors and others who matter, but the ultimate endgame of this action is to advance ideas towards what Malcolm Gladwell describes as the “tipping point”: ensuring issues enter and dominate the mainstream and so must be addressed.
As well as relating to general ideas, however, we can also direct similar efforts towards promoting ourselves as experts whose advice should be sought. Again, the result is the same, with those considered worth listening to finding it easier to catch the ears of policy-makers than those who are not (Ben Goldacre, for instance, provides a prime example of what can be achieved here). So let’s watch this week’s Labour Party conference with interest and see if we can assess not only which of our research might be in favour, but also whether there is scope for enhancing the social robustness of the messages that are not – and make sure they are ready in time for next year.
Dr. Chris Brown’s new book, Making Evidence Matter, published by IOE Press, is out now

How boom times for pure research bred the Bomb

By Blog Editor, IOE Digital, on 29 August 2013

Paul Temple

A recent biography of J Robert Oppenheimer (there have been six since 2004 alone), Inside the Centre: The Life of J Robert Oppenheimer (2012), by Ray Monk, Professor of Philosophy at Southampton University, makes you think about both twentieth-century scientific history and current national policies.

Oppenheimer’s story has much to interest students of higher education. Apart from his time in charge of building the first atomic bomb at the Los Alamos laboratory from 1943-45 – when he showed himself to be an outstanding leader of a disparate group of brilliant egoists – Oppenheimer spent his working life as a university teacher, researcher and administrator, latterly as the Director of Princeton’s Institute for Advanced Study, where his staff at one point included Einstein, Bohr and Dirac.

A point of particular interest is that Oppenheimer’s academic career spanned the period during which Europe, as a result of self-inflicted wounds, ceded world scientific leadership to the United States. When Oppenheimer graduated from Harvard in 1925 (in chemistry, not physics – a reminder that the disciplinary boundaries we now take for granted have only recently become so rigid), bright young American scientists wanting to work with the world’s best researchers crossed the Atlantic as a matter of course. In theoretical physics, Oppenheimer’s chosen research topic, the choice was between Germany, particularly Göttingen and Leipzig, and England, particularly Cambridge.

Oppenheimer began with Cambridge, where he was unhappy (though it was not, he wrote, “quite so bad as Oxford”), and then in 1926 went to work with Max Born, one of the leading figures in quantum mechanics, at Göttingen, receiving his doctorate there in 1927 – and no doubt improving the University’s doctoral completion statistics in the process. The international language of science was then significantly German, in which Oppenheimer, from a German-Jewish family, was fluent: presumably other ambitious American scientists learned German as a matter of course, in the way that German academics now learn English. (Oppenheimer was in any case a gifted linguist: he learned Sanskrit in the 1930s in order to read Hindu literature.)

Several factors came together to allow America to build an atomic bomb in a stunningly short period. It took about three-and-a-half years from approval of what became the Manhattan Project to the first use of an atomic bomb, at Hiroshima on 6 August 1945. But the crucial period, from when the first scientists arrived at Los Alamos to the “Trinity” test in the New Mexico desert on 16 July 1945, lasted 28 months: perhaps what you’d allow for the implementation of a modest IT project in a university today. But the Manhattan Project was able to build on the world’s best physics and engineering research, created in American universities in the 1930s – Berkeley and Chicago in particular – largely with public funding for the purest of pure research. Through the 1930s, for example, Berkeley seemed to have no particular difficulty in obtaining funding to build ever more powerful cyclotrons, with no practical aim in view: nobody seems to have asked them for an impact statement.

American universities, and later the Manhattan Project, also took full advantage of talent sucked in from Europe, particularly Jewish refugees from fascism in Germany, Hungary and Italy. Even Britain took in foreigners: Rudolf Peierls and Otto Frisch, both German-Jewish refugees, worked at Birmingham University in the 1940s and made a vital contribution to building the bomb by showing that the amount of uranium-235 needed to sustain a chain reaction was a matter of kilograms, not tons as had been thought. This insight made the bomb a practical proposition – though even America struggled at first to produce more than a few grams of the stuff. Around the same time, in one of history’s happiest mathematical errors, Werner Heisenberg (of “uncertainty principle” fame), running the Nazi atomic bomb project, made the same calculations but got an answer in tons, and so abandoned the uranium-235 method. Lucky that nobody thought to check his sums.

A number of things allowed the Manhattan Project to succeed, but large-scale, long-term public funding for blue-skies research, together with a policy of grabbing talent from wherever it could be found, and a strong manufacturing economy, were all crucial. Might there, just possibly, be any lessons from this for policymakers today?

Oppenheimer’s loss of his security clearance in 1954, at the height of the McCarthy witch-hunts, was devastating for a man with a strong sense of national duty. There are several ironies here. One is that, while Oppenheimer’s politics were certainly left-wing, he was notably clear-eyed about the Soviet Union, concluding as early as 1947 that negotiations with Stalin over the control of nuclear weapons would be a waste of time. And, just as past outstanding service to the Soviet state was no guarantee of one’s future safety, so the fact that Oppenheimer had given America the bomb (“What more do you want? Mermaids?” a friend asked at his Security Board hearing) did not help him in countering the FBI’s obsession about his political unreliability. There is a depressing contrast between this cold war paranoia and the open, international scientific culture which Oppenheimer had known before the war. Princeton’s refusal to bow to pressure from Washington to sack him must have been a consolation of sorts. The traditions of institutional autonomy and academic freedom, which had served America so well in the Manhattan Project, came to Oppenheimer’s rescue at the lowest point in his career.

The picture on the dust-jacket of Monk’s book shows Oppenheimer writing equations on a blackboard, cigarette in hand. He died of throat cancer in 1967. He was 62.


People having a pop at PISA should give it a break…

By Blog Editor, IOE Digital, on 30 July 2013

John Jerrim

For those who don’t know, the Programme for International Student Assessment (PISA) is a major cross-national study of 15-year-olds’ academic abilities. It covers three domains (reading, maths and science) and since 2000 has been conducted every three years by the OECD. This study is widely respected, and highly cited, by some of the world’s leading figures – including our own Secretary of State for Education, Michael Gove.
Unfortunately not everyone agrees that PISA is such an authoritative assessment. Over the last month it has come in for serious criticism from academics, including Svend Kreiner (PDF) and Hugh Morrison (PDF). These interesting and important studies have been followed by a number of media articles criticising PISA – including a detailed analysis in the Times Educational Supplement last week.
As someone who has written about (PDF) some of the difficulties with PISA, I have read these studies (and subsequent media coverage) with interest. A number of valid points have been raised, pointing to various ways in which PISA may be improved (the need for PISA to become a panel dataset – following children throughout school – raised by Harvey Goldstein is a particularly important point). Yet I have also been frustrated to see PISA being described as “useless”.
This is a gross exaggeration. No data or test is perfect, particularly when tackling a task as notoriously difficult as cross-country comparison, and that includes PISA. But to suggest it cannot tell us anything important or useful is wide of the mark. For instance, if PISA told us nothing about children’s academic ability, it should not correlate highly with our own national test measures. But this is not the case. Figure 1 illustrates the strong (r = 0.83) correlation between children’s PISA maths test scores and performance in England’s old Key Stage 3 national exams. This illustrates that PISA scores are in fact strongly associated with England’s own measures of pupils’ academic achievement.
Figure 1. The correlation between PISA maths and Key Stage 3 maths test scores
Source: https://www.education.gov.uk/publications/eOrderingDownload/RR771.pdf page 100
To take another example, does the recent criticism of PISA mean we actually don’t know how the educational achievement of school children in England compares to other countries? Almost certainly not. To demonstrate this, it is very useful to draw upon another major international study of secondary school pupils’ academic achievement, TIMSS. This has different strengths and weaknesses relative to PISA and at least partially overcomes some of the recent criticisms. The key question is: does it tell us the same broad story about England’s relative position?
The answer to this question is yes – and this is shown in Figure 2.  PISA 2009 maths test scores are plotted along the horizontal axis and TIMSS 2011 maths test scores along the vertical axis. I have fitted a regression line to illustrate the extent to which the two surveys agree over the cross-national ranking of countries. Again, the correlation is very strong (r = 0.88). England is hidden somewhat under a cloud of points, but is highlighted using a red circle. Whichever study we use to look at England’s position relative to other countries, the central message is clear. We are clearly way behind a number of high performing East Asian nations (the likes of Japan, Korea and Hong Kong) but are quite some way ahead of a number of low and middle income countries (for example Turkey, Chile, Romania). Our exact position in the rankings may fluctuate a little (due to sampling variation, differences in precise skills tested and sample design) but the overall message is that we are doing okay, but there are other countries that are doing a lot better.
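For readers curious how a correlation like this is computed, a minimal sketch in Python follows. The scores below are invented purely for illustration – they are not the actual PISA or TIMSS results, which come from the sources cited under each figure:

```python
# Invented country-level maths scores, for illustration only
# (not real PISA 2009 or TIMSS 2011 data).
pisa  = [600, 546, 555, 492, 487, 445, 421, 427]
timss = [613, 570, 586, 507, 509, 458, 452, 440]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(pisa, timss), 2))
```

A value near 1 means the two studies rank countries in almost the same order; a value near 0 would mean they disagree entirely. The real PISA–TIMSS comparison in Figure 2 yields r = 0.88.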
Figure 2. The correlation between PISA 2009 and TIMSS 2011 Maths test scores
Source: Appendix 3 of http://johnjerrim.files.wordpress.com/2013/07/main_body_jpe_resubmit_final.pdf
I think what needs to be realised is that drawing international comparisons is intrinsically difficult. PISA is not perfect, as I have pointed out in the past, but it does still contain useful and insightful information. Indeed, there are a number of other areas – ‘social’ (income) mobility being one – where cross-national comparisons are on a much less solid foundation. Perhaps we in the education community should be a little more grateful for the high quality data that we have rather than focusing on the negatives all the time, while of course looking for further ways it can be improved.
For details on my work using PISA, see http://johnjerrim.com/papers/

What value a doctorate?

By Blog Editor, IOE Digital, on 24 June 2013

Richard Freeman
As Programme Leader for Researcher Development here at the Institute of Education, I focus on developing researchers themselves rather than their research. So I was very interested to see last month a report published by Vitae: What do researchers do? Early career progression of doctoral graduates. In the ‘Age of Austerity’ it was understandable that coverage of this report in the media (e.g. Times Higher Education) focused on the financial benefits of gaining a PhD. Doctoral graduates were seen as more ‘recession proof’. Indeed, the Foreword referred to how the report “provides evidence of the employability and value of doctoral graduates compared with masters and good first degrees. It shows that doctoral graduates continue to enjoy a salary premium relative to those with Masters and good first degrees. Furthermore, the incomes of doctoral graduates have broadly kept pace with overall UK growth in earnings, whereas those of holders of Masters and first degrees have fallen back.”
The Doctoral School here at the Institute of Education is responsible for over 900 research students, putting it in the top ten in terms of size of the 59 institutions reviewed in the recent Quality Assurance Agency for Higher Education (QAA) Outcomes from Institutional Audit: 2009-11: Postgraduate research students. For us the support we provide for research students is an essential part of making the PhD worthwhile, not just financially, but in helping the doctoral graduate to become who they want to be throughout their life.
So for me, the most striking results in the report are those on career satisfaction and the non-financial value of the doctorate. In the report, 92% of respondents reported high levels of career satisfaction and 86% felt that their doctoral degree experience prepared them well for, or helped them progress towards, their career aspirations. More than 85% felt that their doctorate had enhanced the quality of their life generally. Less than 10% felt that their doctorate did not enhance their social and intellectual capabilities beyond employment.
Most interesting were the results for Arts & Humanities doctoral graduates, who fared worse on employability than their peers in other disciplines. Despite that, fewer than 6% believed that the quality of their life generally had not been enhanced by their doctoral experience. This aspect of doctoral study is easily overlooked if doing a PhD is seen only as toiling away in solitude to become, say, the world expert on an obscure Romanian painter. In fact, the experience of doing a PhD is very different now from fifty years ago, when the student’s relationship with their supervisor was paramount – the ‘Secret Garden’ of supervision, as it has been called. Now the PhD is far more structured, with support for the development of specific research skills as well as more generic ones such as critical thinking and public engagement.
It is important that we recognise the value of education to society, but also to the individual. For the doctoral student, this survey reveals that value clearly – which is reassuring for all of us who work with doctoral students.
Vitae: What do researchers do? Early career progression of doctoral graduates
The report presents results from doctoral graduate respondents to the HESA ‘Longitudinal’ Destinations of Leavers from Higher Education (LDLHE) survey in November 2010. This survey considered UK and EU graduates’ circumstances, employment outcomes and attitudes around three and a half years after they had graduated. These results are compared with similar respondents to an equivalent survey from 2008.
Dr Richard Freeman is Programme Leader for Researcher Development at the IOE.

Evidence-based practice: why number-crunching tells only part of the story

By Blog Editor, IOE Digital, on 14 March 2013

Rebecca Allen
As a quantitative researcher in education I am delighted that Ben Goldacre – whose report Building Evidence into Education was published today – has lent his very public voice to the call for greater use of randomised controlled trials (RCTs) to inform educational policy-making and teaching practice.
I admit that I am a direct beneficiary of this groundswell of support. I am part of an international team running a large RCT to study motivation and engagement in 16-year-old students, funded by the Education Endowment Foundation. And we are at the design stage for a new RCT testing a programme to improve secondary school departmental practice.
The research design in each of these studies will give us a high degree of confidence in the policy recommendations we are able to make.
Government funding for RCTs is very welcome, but with all this support why is there a need for Goldacre to say anything at all about educational research? One hope is that teachers hear and respond to his call for a culture shift, recognise that “we don’t know what’s best here” and embrace the idea of taking part in this research (and indeed suggest teaching programmes themselves).

It is very time-consuming and expensive to get schools to take part in RCTs (because most say no). Drop-out during trials can be high, especially where a school has been randomised into an intervention it would rather not have, and it is difficult to get the data we need to measure the impact of the intervention on time.
However, RCTs cannot sit in a research vacuum.
Ben Goldacre does recognise that different methods are useful for answering different questions, so a conversation needs to be started about where the balance of research funding for different types of educational research best lies.
It is important that RCTs sit alongside a large and active body of qualitative and quantitative educational research. One reason is that those designing RCTs have to design a “treatment” – the policy or programme that is being tested to see if it works. This design has to come from somewhere, since without infinite time and money we cannot simply draw up a list of all possible interventions and test them one by one. To produce our best guess of what works, we may use surveys, interviews and observational visits that took place as part of a qualitative evaluation of a similar policy in the past. We also use descriptions collected by ethnographers (researchers who are “people watchers”). And of course we draw on existing quantitative data, such as exam results.
All of this qualitative and quantitative research is expensive to carry out, but without it we would have a poorly designed treatment with little likelihood of any impact on teacher practice. Without the experience of other research, we might also carry out the programme we are testing poorly, for reasons we failed to anticipate.
The social science model of research is not ‘what works?’ but rather ‘what works for whom and under what conditions?’
Education and medicine do indeed have some similarities, but the social context in which a child learns shapes outcomes far more than it does the body’s response to a new drug. RCTs may tell us something about what works for the schools involved in the experiment, but less about what might work in other social contexts with different types of teachers and children. Researchers call this the problem of external validity. Our current RCT will tell us something about appropriate motivation and engagement interventions for 16-year-olds in relatively deprived schools, but little that is useful for understanding 10-year-old children – or indeed 16-year-olds in grammar schools or in Italian schools.
The challenge of external validity should not be underestimated in educational settings. RCTs cannot give us THE answer; they give us AN answer, and its validity declines as we try to implement the policy in different settings and over different time frames. This poses something of a challenge to the current model of recruiting schools to RCTs, where many have used “convenience” samples, such as a group of schools in an academy chain committed to carrying out educational research. This may provide valuable information to the chain about best practice for its own schools, but it cannot tell us how the same intervention would work across the whole country.
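The convenience-sample problem can be made concrete with a small, entirely hypothetical simulation (every number here is invented for illustration, not drawn from any real trial): suppose the same intervention has a larger true effect in deprived schools than in grammar schools. A trial run only in a convenience sample of deprived schools will then overstate the effect we would see in a mixed national sample.

```python
import random

random.seed(1)

# Hypothetical effect sizes, invented for this sketch: the intervention
# helps pupils in deprived schools more than those in grammar schools.
TRUE_EFFECT = {"deprived": 0.30, "grammar": 0.05}

def run_trial(contexts, n_per_school=50):
    """Simulate a two-arm RCT across the given school contexts and
    return the estimated treatment effect (treated mean - control mean)."""
    treat, control = [], []
    for ctx in contexts:
        for _ in range(n_per_school):
            noise = random.gauss(0, 0.5)       # pupil-level variation
            if random.random() < 0.5:          # randomise pupils to arms
                treat.append(TRUE_EFFECT[ctx] + noise)
            else:
                control.append(noise)
    return sum(treat) / len(treat) - sum(control) / len(control)

# A "convenience" sample drawn only from deprived schools...
convenience = run_trial(["deprived"] * 40)
# ...versus a sample reflecting a mixed national population.
national = run_trial(["deprived"] * 20 + ["grammar"] * 20)

print(round(convenience, 2))  # close to the deprived-school effect, 0.30
print(round(national, 2))     # closer to the mixed-population average, 0.175
```

Both trials are internally valid – randomisation works in each – yet they answer different questions, which is exactly the external validity point.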
Social contexts change faster than evolution changes our bodies. Whilst I would guess that taking a paracetamol will still relieve a headache in 50 years’ time, I suspect that the best intervention to improve pupil motivation and engagement will look very different to those we are testing in an RCT today. This means that our knowledge base of “what works” in education will always decay and we will have to constantly find new research money to watch how policies evolve as contexts change and to re-test old programmes in new social settings.

Open access publishing: go for green, not gold

By Blog Editor, IOE Digital, on 21 February 2013

Paul Temple
The digital revolution is changing the world in often unexpected ways, although sometimes widely predicted changes can take longer to appear than first suggested. “The end of the book” has been predicted for decades now: while UK physical book sales were still running at £1.5bn in 2012, they seem now to be in a slow, long-term decline in the face of e-book sales. The value of UK recorded music sales is also showing steady decline for the same reason – digital competition.
The world of academic journal publishing looks as if it is now about to undergo the same sort of digital shock-treatment. Academics have long grumbled about the handful of publishers – often making very handsome profits – that dominate the journal market: Elsevier, Palgrave-Macmillan, SAGE, Springer, Taylor and Francis and Wiley-Blackwell. The Economist recently said that these firms possess “that rare thing in the media industry: a licence to print money”. That’s because they can get their authors and editors to work for nothing, and then sell the finished products back to the universities that mostly employed the authors and editors in the first place.
When journals were entirely printed artefacts, found mainly on the shelves of university libraries, commercial publishers arguably provided expertise in their production, quality control and distribution – although journal publishing had begun as a university-run activity, which over the years had typically been handed over to commercial firms, with their economies of scale. But now the internet seems to be putting the boot on the other foot. The unlovely term “disintermediation” is key here: the internet can cut out the middle-person, making it practicable for (in this case) academic research to be made directly available to anyone who fancies reading it. The editorial and peer-review costs need to remain, but these hidden subsidies to the publishing industry can now be brought in-house by the universities. Copy-editing, typesetting, indexing and so on also still have to be paid for: but again, digital technologies have simplified most of these processes.
Naturally, the big journal publishers have not taken this existential threat to their businesses lying down. They are pushing the model of the “gold” open access, online journal, which will be free for all to read – but funded by the authors paying “article processing charges”, or APCs. It is thought that these will average out at around £1,500 per paper. So the publishers’ income will still come from (mainly) the universities, but in the form of APCs rather than journal subscriptions. Better still, APCs will probably be easier and cheaper for the publishers to collect – no more sales people trudging from university to university to sell subscriptions – and university staff will still be doing the unpaid peer review and editorial work. If you’re a publisher, what’s not to like?
Luckily for universities, fee-paying students and the taxpayer (because most of the journal publishers’ profits ultimately come from these sources) there is an alternative. The “green” open access model publishes papers in online journals after the usual peer review and editorial processes, without APCs. The green journals can be owned and managed in a variety of ways, but are essentially non-commercial. Naturally there are costs involved, but the total costs of the green publishing model will always be much lower than the gold alternative (the title is well-chosen) because the profits of multinational corporations don’t enter into the equation. Instead, green publishing will get research into the public domain quickly and cost-effectively. By embracing green publishing, universities will reclaim one of their historic missions: producing and disseminating knowledge as a public good.

Confusion in the (social mobility) ranks? Interpreting international comparisons

By Blog Editor, IOE Digital, on 4 February 2013

John Jerrim 

Last Friday the Sutton Trust published a very interesting report questioning the validity of global educational rankings. Having written extensively on this subject myself, I can only welcome this report as an important contribution to policymakers’ understanding of international comparisons of educational attainment. Yet it also raised questions in my mind about the robustness of cross-national comparisons in another area of great policy interest – social mobility.
Readers have probably heard that social mobility is low in the UK by international standards. A number of sensationalist stories have led with headlines such as “Britain has worst social mobility in western world” and “UK has worse social mobility record than other developed countries”.
Leading policymakers have made similar statements. To quote England’s Secretary of State for Education Michael Gove: “Those who are born poor are more likely to stay poor and those who inherit privilege are more likely to pass on privilege in England than in any comparable country”.
But is this really the case? Are we sure social mobility is indeed lower in this country than our international competitors? Or is it the case that, just like global league tables of educational achievement, there remains great uncertainty (and misunderstanding) surrounding cross-national comparisons of social mobility?
The answer can actually be found by exploring a little further the academic research that has been published on the Sutton Trust website. Figure 1 is taken from a Social Mobility Report published on 21 September 2012.
Figure 1: International comparisons of social mobility – Sutton Trust report 21st September 2012

Saving the technical details for another time, the longer the bars in this graph, the less socially mobile a country is. Here we see a familiar story: Britain ties with Italy as the least socially mobile.
Figure 2, however, tells a different story. This is taken from another report published by the Sutton Trust just three days later.
Figure 2: International comparisons of social mobility – Sutton Trust report 24th September 2012
This graph plots a measure of income inequality (horizontal axis) against an economic measure of social mobility (vertical axis). Thus the closer a country is to the top of the graph, the lower its level of social mobility. Now, it appears that the UK may actually be more socially mobile than France, Italy and the US, and very similar to countries like Australia, Canada and Germany. Perhaps even more surprisingly, the UK is also similar to Sweden, Finland and Norway. Indeed, the only country that we can have any real confidence that the UK is significantly different to is Denmark.
Why is there such a contrast between these two sets of results? The trouble is, cross-national studies of social mobility have to rely upon data that are not really cross-nationally comparable. Rather, data of varying quality have been used in each of the different countries. Individuals are interviewed at different ages, using different questionnaires and survey procedures. Indeed, even different statistical analysis methods are used. No wonder, then, that social mobility in the UK can look very different, depending upon which dataset and method of analysis are used.
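How analysis choices produce such different answers can be sketched with a toy simulation (all parameters are invented; this is not a reconstruction of either Sutton Trust study). Imagine the same society surveyed twice: once when children are in mid-career, and once when they are young and their observed incomes only partly reflect lifetime earnings. The standard mobility measure – the regression slope of child income on parent income, where a higher slope means less mobility – comes out quite differently.

```python
import random

random.seed(42)

# Invented parameters for one hypothetical society.
TRUE_ELASTICITY = 0.5
N = 5000

parents = [random.gauss(0, 1) for _ in range(N)]
# Each child's permanent (lifetime) log income.
permanent = [TRUE_ELASTICITY * p + random.gauss(0, 0.8) for p in parents]

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Survey A measures children in mid-career, close to lifetime income.
survey_a = [c + random.gauss(0, 0.2) for c in permanent]
# Survey B interviews children young, when observed income only partly
# reflects lifetime income (scaled down, with more transitory noise).
survey_b = [0.6 * c + random.gauss(0, 0.6) for c in permanent]

print(round(slope(parents, survey_a), 2))  # near the true 0.5
print(round(slope(parents, survey_b), 2))  # attenuated, near 0.3
```

Nothing about the society changed between the two “surveys”; only the measurement did – which is precisely why estimates built from non-comparable national datasets can rank the same country so differently.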
So, just as global rankings of educational attainment can be misleading, so can those of social mobility. In fact, the problems with international comparisons of social mobility are often significantly worse. Yet this does not seem to stop journalists and policymakers making bold claims that “Britain has some of the lowest social mobility in the developed world”. Things are rarely so black and white in the social sciences – and social mobility is no exception. This uncertainty should be recognised when journalists and government officials report on social mobility rankings in the future. Otherwise, I fear for the credibility of research on this extremely important social issue.