
Improvement Science London


Evaluation: rigour, relevance and naivety

By Martin Marshall, on 25 June 2015


Martin Marshall, Professor of Healthcare Improvement; Laura Eyre, Research Fellow, UCL


The Nuffield Trust, like many similar organisations, is inundated with requests from people working in the NHS who want help to evaluate improvement initiatives, a subject already touched upon in a recent blog by the Nuffield Trust’s Alisha Davies.

These requests highlight a big problem: demand for evaluation is increasing, but the supply of expert evaluators is limited. So the Nuffield Trust organised a conference bringing together evaluators, commissioners of evaluation and potential users of evaluation learning to try to find a solution.

The discussion covered three time-honoured questions that researchers often hope evaluation can answer.

  1. Does this intervention work?

This question is much loved by researchers and by passionate advocates of evidence-based practice. It is a simple question, compelling but troublesome. If you were to ask us this question then we wouldn’t need to know much about your subject area or your context to give you the answer. The answer is ‘it depends’. Or if you want a bit more detail, ‘sometimes, but not very much and not in any predictable way’.

There is no binary answer to this question because improvement interventions are often unstable, have social as well as technical elements, are implemented in complex (sometimes chaotic) organisational and policy environments and because the main variable is the human condition in all its beauty. To expect such interventions to ‘work’ or ‘fail’ is at best naïve.

  2. How does it work?

This is an important question for those who are taking part in an improvement project and for those who are interested in the generalisability of the learning to other settings. It is a question that evaluation funders are increasingly asking and for which qualitative researchers are adding enormous value. Attempts to answer this question are revealing a fundamental problem – we often don’t know what ‘it’ is or how ‘it’ is expected to make a difference. Addressing these questions is taking our understanding of improvement forward in leaps and bounds.

  3. How do we make it work better?

This third question is the most important for practitioners and the most challenging for evaluators. The concept of researchers bearing some responsibility for action is shocking to some academics but the field of participatory research has a long and honourable history in some sectors (education and community development in particular) and is underpinned by rigorous theories and methods.

For some reason the ‘objectivity’ of the biomedical sciences has encroached on the practice of social science in the health field. In doing so it has discredited an approach to evaluation which has the potential to increase the impact of science on service improvement. Thankfully, practical approaches to break down the boundaries between research and practice are emerging, such as the ‘Researcher-in-Residence’ model which was much discussed at the conference.

We are a long way from mainstreaming participatory approaches to evaluation in the way that has been achieved in the US and Canada. Achieving this will require evaluation commissioners, funders, universities and academic journals to think more broadly about the nature and the practice of science.

The Nuffield Trust conference represents the start of a long journey.
