
IOE Blog


Expert opinion from IOE, UCL's Faculty of Education and Society


Researchers need to learn political skills if they want to make a difference

Blog Editor, IOE Digital, 24 August 2016

Titus Alexander
This week the government published its much postponed childhood obesity strategy, to a chorus of criticism from experts in public health. Doctors, health charities, and cancer and diabetes specialists have warned that the measures can’t stop the growing obesity crisis, which costs the NHS an estimated £4.2bn a year and is projected to cost £22.9bn per year by 2050. Simon Stevens, head of NHS England, has often said obesity will bankrupt the NHS unless action is taken now.
Researchers from CLOSER, a consortium of longitudinal studies led by the IOE, have documented the growing epidemic of obesity and concluded that the UK needs to target public health interventions at young people to stem the spread of obesity. Research into health promotion also shows what measures would reduce obesity. Government ministers and officials know this, and the evidence has been part of past consultations and guidance. But Sarah Wollaston, the Conservative MP who chairs the health select committee, says of the strategy: ‘big interests have trumped those of children’.
Bitter public health battles over tobacco, alcohol, pesticides and other enjoyable or useful (more…)

Understanding impact: what does it actually mean?

Blog Editor, IOE Digital, 9 May 2014

Chris Husbands 
Research changes lives. Somewhere along the way, every research project involves a question which is about making a difference to the way people think, behave or interact. In applied social science – the field in which the IOE excels – research sets out to understand, shape and change the world around us.
The idea of research ‘impact’ has become important in research management and funding. Universities are now held to account for the impact of their research, not least through the Research Excellence Framework. But ideas about ‘impact’ and how to secure it vary.
In popular accounts of science, research is sometimes described as ‘earth shattering’, as if it creates something like a meteorite crater reshaping the landscape for ever. An example might be Frederick Sanger’s development of rapid DNA sequencing in the 1970s, which has transformed practices across any number of fields.
But there are other images to describe impact. Not all research has what the LSE research team looking at impact call an ‘auditable’ consequence. They comment that research applied in practice is “always routinized and simplified in use” so that over time, the impact fades like ripples in a pond.
The University of the Arts talks of its research as having creative outcomes that enhance cultural life and provides examples of artworks which create possibilities and shift perceptions: ideas which float into the air like dandelion seeds.
The impact of some research is apparent quickly – though almost never as rapidly as the tabloid newspapers which almost weekly trumpet miracle breakthroughs would have us believe – whereas in other cases it can take decades before the value of research becomes apparent.
Not only does the IOE itself undertake research which seeks to have an impact, it’s also interested in understanding what impact looks like, what it means and how it happens. At a recent conference we explored the linked questions of research impact and public engagement: the relationships between research, policy, practice and improvement are things some of my colleagues try to understand.
The ESRC defines research impact as “the demonstrable contribution that excellent research makes to society and the economy”. This suggests three components: the demonstrable nature of the contribution, the quality of the research, and the focus on both society and the economy. Successful impact means holding all three in creative relationship: without any one of them, the other two are diminished. Research which is not excellent will not make a demonstrable contribution; research which sits unpublished and unread will not, whatever its methodological sophistication, make a demonstrable contribution; and so on.
Understandings of impact – or of knowledge mobilisation and its dynamics – have been transformed over the last fifteen years, as the barriers to, and strategies for, making creative use of research to impact more directly on people’s lives have become clearer, and as ways of engaging the public in the dynamics of research have developed. No research – however excellent – ever simply has an ‘impact’.
Richard Doll discovered that smoking caused lung cancer in the 1950s, but it took several years and active public health campaigns to change behaviour. In education, the gap between, say, research on assessment for learning (AfL) and AfL practice suggests that – like the idea of the droplet making ripples on a pond – the impact of research can quickly dissipate unless something active is done.
Research always needs mediating – or, to put it differently, research impact needs a plan. Academics used to talk about ‘dissemination’, but thinking has moved far beyond such models – “I research, you listen” – to more creative and nuanced understanding of the ways knowledge moves – and does not move – around organisations and society. We have learnt that while these relationships are complex, they can be managed effectively.
In the early days of work on research impact, thinking focused on ‘what works’, on the assumption that research could tell us what techniques have maximum effectiveness, and that this could in some way be taken to scale by more or less sophisticated dissemination techniques. We have become – individually, institutionally, collectively – more sophisticated than that, and we have done so quickly. We know that ‘how it works’ and ‘why it works’ are just as important, and that the effort to link the worlds of research, policy and practice involves commitment and engagement from all parties. In Ontario, the Knowledge Network for Applied Education Research tries to draw key groups together to enhance knowledge mobilisation. Bringing about change in practices is never easy, as anyone who has ever tried to lose weight, get fitter or learn a new language knows.
There’s a nice social enterprise quotation: “success does not matter. Impact does”. The IOE is a socially engaged university. We care about the quality of the research we undertake, and we make sure that it is of the highest quality. But we care equally about the way our research shapes the society it is seeking to understand. We understand that research evidence will always be only one of the factors that influences society, and that other factors always intervene. But we also know that progress has been made in the past in this field and more can be made in future with persistent effort.
For us, ‘impact’ is not an artefact of the 2014 REF, nor an obligatory hoop through which to jump. There is a wonderful line from the 2008 strategic plan for Carnegie Mellon University – and very, very few university strategic plans contain quotable lines. But in 2008 Carnegie Mellon got it right: “we measure excellence by the impact we have on making the world better”.

Women publish less than men in the social sciences. Or do they…?

Blog Editor, IOE Digital, 22 August 2012

Karen Schucan Bird
We in higher education all know how important it is to publish our research. Recognition and reward are granted to productive scholars and their universities. But is there equal opportunity for all to succeed? With growing evidence from the material and life sciences that women publish less than men, I wondered whether other female social scientists and I were publishing less than we would expect. I sought to investigate.
To do so, I compared two sets of data: 1) demographic data of UK academics (pdf) to identify the proportion of social scientists that were women (in 2003/4, this was 40%), and 2) a random sample of 202 journal articles published at a similar time, so that I could identify the proportion of articles authored by UK-based women. The logic that drove my analysis was simple: if 40% of social scientists were women, then we could expect that 40% of publications would be written by women.
I analysed the social sciences as a whole as well as focusing on particular disciplines: political science, economics, social policy and psychology. Traditionally, these disciplines are gendered subject areas. Economics, for example, has tended to represent a “harder”, masculine area of social science, with high proportions of male academics and students. In contrast, social policy is traditionally considered a feminine field, with high levels of female scholars and students.
My findings
Across the social sciences as a whole, women did not publish as many articles as we might expect. Whilst representing 40% of the social science community, women only contributed 32% of the sampled articles. A similar discrepancy was found in the more “masculine” disciplines. Whilst women made up 24% of political scientists in the UK, they only contributed 8% of the articles sampled. In economics, women constituted 22% of academics whilst writing 13% of the sampled articles. This latter finding, however, was not statistically significant (whilst the other reported findings were).
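The paper does not say which statistical test underlies these significance claims, but the logic of the comparison can be illustrated with a simple one-sample proportion z-test (a normal approximation; an exact binomial test is an alternative). The count below, roughly 65 of the 202 sampled articles, is reconstructed from the reported 32% and is an assumption for illustration only:

```python
import math

def two_sided_prop_test(k, n, p0):
    """Normal-approximation z-test: does the observed proportion k/n
    differ from the benchmark proportion p0?"""
    phat = k / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    z = (phat - p0) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Social sciences overall: ~32% of the 202 sampled articles
# (about 65) were by women, against an expected 40% share.
z, p = two_sided_prop_test(65, 202, 0.40)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these reconstructed numbers the gap is unlikely under the 40% benchmark, which is consistent with the overall finding being reported as statistically significant; a smaller disciplinary subsample (as in economics) would widen the standard error and could easily fail to reach significance even with a similar-sized gap.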
There were more optimistic findings elsewhere. In the “feminine” disciplines of social science, women’s publishing levels were proportionate to their representation in the field. In psychology, women constituted 43% of the discipline and wrote 43% of the sampled articles. Similarly, women made up 46% of social policy academics in the UK and contributed 53% of the articles sampled. In these disciplines it seems that women were able to publish at a level comparable to their male peers.
So, it seems, there were differences in men’s and women’s publication productivity. With the 2014 Research Excellence Framework (REF) looming over us, I can’t help but feel troubled by some of my findings. In the last quality assurance process (the Research Assessment Exercise, 2008), men were almost 40% more likely than their female colleagues (pdf) to be entered. If women are publishing less than men then a similar outcome may be repeated in 2014. How can we explain this and what does this say about the academy?
I speculate about three possible explanations:

  • Women’s research is not sufficiently recognised or valued by our universities or the academy. Understandings of “knowledge” and “scientific quality” privilege traditional, more “masculine” approaches to research that are more commonly undertaken by men. Particular disciplines such as social policy and psychology may provide a space in which alternative research approaches are accepted, valued and published.
  • Female academics may take on a greater proportion of the teaching and administrative roles within the academy. Thus, they have less time to dedicate to research and its publication than their male colleagues.
  • Women are actively seeking new opportunities to undertake research and dissemination activities that do not involve publishing in the standard ways. Perhaps journals and other conventional outlets for research are being replaced by new media (such as blogs) and alternative platforms.

Publishing is absolutely central to the academic world. If women are not publishing at a level comparable with their male peers, for whatever reason, then surely they are at a career disadvantage? I urge us all to watch and see whether our male and female colleagues fare differently in the forthcoming REF.
For more details see: Schucan Bird, K. (2011) ‘Do women publish fewer journal articles than men? Sex differences in publication productivity in the social sciences’, British Journal of Sociology of Education, 32(6), pp. 921–937.

Understanding impact: why relationships with users matter

Blog Editor, IOE Digital, 3 August 2012

Caroline Kenny

With the deadline for submissions to the 2014 Research Excellence Framework (REF) fast approaching for higher education institutions in the UK, and increased focus on engagement, for example through the Research Councils Pathways to Impact process, it seems a good time to review what we know about research impact.

1. What is impact?

The REF defines impact as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia. It includes, but is not limited to, effects on, changes or benefits to:

  • Activities, attitudes, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding;
  • Audiences, beneficiaries, communities, constituencies, organisations or individuals; and
  • Any geographical location whether locally, regionally, nationally or internationally.

It also includes the reduction or prevention of harm, risk, cost or other negative effects.

Not included are impacts on research or the advancement of academic knowledge (this is assessed within the “outputs” and “environment” elements of the REF) or impacts on students, teaching or other activities.

2. Why is it important?

Aside from comprising 20% of the 2014 REF assessment, impact is important for many other reasons. Morally, taxpayers, funders and other stakeholders have a right to research that can influence, alter and change the social world. Moreover, decision-making informed by the best available research has a better chance of benefitting, and avoiding harming, people. It also reduces the chances of public money being wasted on unsuccessful interventions.

Engaging different stakeholders also provides many benefits to researchers and universities including: new perspectives on or approaches to work; new skills; as well as ensuring research is meaningful, timely and useful.

3. Why is it controversial?

Concerns have been raised about the possibilities of:

  • Prioritising certain types of research over others e.g. applied social research over more abstract philosophical/theoretical research.
  • Ignoring the value of “blue-sky thinking”.
  • Encouraging the use of narrowly focused or lower quality research in decision-making.

4. What do we know about impact?

Getting research used in policy and practice is a complex process and we are still learning about the different ways that this happens.  From the research undertaken as part of the Evidence Informed Policy and Practice in Education in Europe (EIPPEE) project and wider work at the EPPI-Centre at the Institute of Education, we know that:

  • Research is only likely to be used if it is relevant to the needs of its potential users.  To be relevant, research should be clear and easily understood, of good quality, timely and available.
  • Research can be used in many different ways, ranging from directly informing policy and/or practice to the more indirect, or “conceptual” use, where it shapes attitudes, beliefs or understandings.
  • Research may not have an impact for a very long time and whether it does or not depends on many factors. The nature of the research is only one of these. Issues that affect potential users of research are also important.  For example: Do they have the skills to be able to find, understand or use the research effectively? And do they work in organisations that are receptive or willing to use research?
  • Most people focus on how research is packaged or communicated when trying to achieve impact. Existing knowledge tells us that this is not sufficient. Studies have demonstrated the “social” nature of research and the importance of researchers interacting with users to build relationships and trust. This not only increases the chances that research is relevant to these groups but also overcomes barriers relating to whether the research comes from a credible (and trusted) source.


Having an impact with research involves many factors, only some of which are down to the research itself. To increase the likelihood that research is used, we need more understanding of the different ways that research has impact and the effectiveness of different strategies to achieve it. Evidence shows that we need to focus less on communicating research and more on developing relationships with users. This reflects a shift from a very simplistic understanding of research impact, where we just do the research and try to publish it, to one that better reflects the complexity of the decision-making process and the nature of the relationships between research and its use.