Improvement Science London


Archive for the 'Uncategorized' Category

Why the NHS needs General Practice

By Martin Marshall, on 27 May 2014

Professor Martin Marshall

Lead, Improvement Science London

At times of crisis it’s easy to hunker down, to become inward looking. But if general practice responds defensively to the major challenges of increasing workload, reducing funding and ill-informed criticism from the media and politicians, then matters will get worse. It is time to go on the offensive, to clarify why a vibrant general practice is essential not only to individual patients and the communities it serves, but also to the very survival of the NHS. Four roles that general practice fulfils, day in and day out, are particularly important.
First, general practice is the part of the NHS where uncertainty is acknowledged and risk is managed. In contrast to hospital practice, there is a low probability of disease in patients seen in general practice. It has been estimated that more than 60 per cent of presentations in general practice cannot be explained in terms of recognised disease processes. Policy makers incorrectly interpret this as GPs being over-skilled for much of their work, but in doing so they fail to understand the level of sophistication required to make judgements about when to investigate and when to reassure. If GPs referred all people with potentially dangerous symptoms and signs to hospital, the NHS would implode in weeks. Occasionally GPs get it wrong; the vast majority of the time they get it spot on.

Second, general practice is the part of the NHS where the interface between professionalised care and self-care is managed. Self-care of the symptoms and signs of ill-health is infinitely more common than care provided by health professionals. Minor changes in people’s help-seeking behaviours can have a massive impact on the use of NHS resources. There are therefore practical as well as philosophical reasons for encouraging a high level of shared care and informed self-management, particularly for people with long term conditions. Promoting self-care effectively requires the deep understanding that GPs have of individuals’ health beliefs and the environments that they live in, as well as technical expertise in encouraging behaviour change. GPs play an essential role if policies promoting shared and self-care are to be delivered.

Third, general practice is the place where the up-stream determinants of health are recognised and managed. The environmental and behavioural determinants of ill-health, such as poor housing, unemployment, diet, exercise and stress are widely recognised but highly resistant to remedial action. As members of the communities that they serve, GPs have a deep understanding of what needs to be done as well as having the trust of patients to lead change. General practice is public health with a personal touch.

Finally, general practice is the part of the NHS where the tensions inherent in the multi-dimensional approach to quality are handled. Hospital specialists rightly focus on the clinical effectiveness and safety of the care that they provide – this is what patients want and need when they go to hospital. But someone in the health system needs to bring a balanced view of quality, managing what are sometimes trade-offs between good clinical outcomes and waiting times, between providing safe care and the costs of minimising risk, between meeting the preferences of individuals and ensuring fairness to everyone. Taking responsibility for these trade-offs is neither easy nor popular but GPs do it effectively every day.

General practice will not be able to continue carrying out these roles without a bigger share of NHS resources, without spending more time with individual patients, and without a workforce that has a high level of self-confidence and morale. This is why the RCGP’s campaign for the future of general practice, Put Patients First, requires everyone’s active support.

Rethinking Primary Care

By Martin Marshall, on 11 November 2013

Professor Martin Marshall

Lead, Improvement Science London

Most people are inclined to what we might call a ‘provider bias’ when asked to describe how health care is organised and delivered. Familiar structures are embedded in our psyche – primary care, where generalist first-contact services are provided close to people’s homes; secondary care, where a wide range of specialist services are provided in general hospitals; and tertiary care, where a narrow range of services are provided in super-specialist hospitals. If the concept of ‘self-care’ gets any look-in, it is usually as an after-thought and rarely with much conviction.

But this neat world is being challenged, and not just because we are blurring the boundaries between traditional sectors. If the formal institutions making up the NHS are to survive in any form, we need to put greater emphasis on the informal systems that underpin them. At a recent seminar I heard Stewart Bell, Chief Executive of Oxford Health NHS Foundation Trust, one of the most experienced managers in the NHS and a champion of things unfashionable, suggest a radical change in terminology. How about this: Primary care is what people do for themselves to improve their health, like taking paracetamol when they have a headache or looking after their diabetes. Secondary care is what families, friends and members of the local community do for people when they are unwell, like providing a listening ear when someone is stressed, or reminding others to give up smoking and eat healthily. Tertiary care is what general practitioners, community-based nurses and other community practitioners provide for patients when they decide to utilise formal care. Quaternary care is what goes on in hospitals and quinary care is what happens in super-specialist hospitals.

Is this just playing with words? I don’t think so. Language is a product of the way that we think but it also influences how we conceptualise what we see around us. It is strange that we seem to be more willing to restructure our buildings than we are to restructure our thinking, but doing the latter might be more beneficial than the former. The interface between community and hospital services is important but the one between self-care and professionalised care has the potential to offer far more opportunities to improve the experiences, outcomes and value of care. This is the space in which people manage self-limiting conditions without recourse to expensive and sometimes damaging medical interventions, where people with long term conditions realise the evidence-based benefits of working as active partners with health professionals rather than as grateful recipients of professional largesse. Redefining what we mean by ‘primary care’ puts patients first and raises the profile of self-care and shared care in the consciousness of the health system.

The formal health system is important but, in the greater scheme of things, not as important as it thinks it is. And it might have a greater impact on people’s health if it focused its considerable resources on helping people to deliver their own primary and secondary care.

Dismantling Mantras

By Martin Marshall, on 30 September 2013

Professor Martin Marshall

Lead, Improvement Science London

I’m not a GP or an academic because I like to conform, so it should come as no surprise that I can’t hear a mantra without wanting to challenge it. The quality improvement world is full of popular wisdoms, rehearsed and re-rehearsed by its enthusiastic followers. How about this one: “Data should be used for improvement, not for judgement.”

No shortage of experts in the field have differentiated between the characteristics of data used for improvement, accountability and research purposes. They tell us that data used for improvement can be ‘good enough’, that it is used to indicate rather than to reach definitive conclusions, that bias can be tolerated and that control charts allow us to attribute outcomes to interventions. In contrast, we are told that data used for accountability purposes needs to be more rigorous, that it is used to reward or punish, and that bias needs to be minimised. And for completeness we are told that data used for research purposes needs to be of the highest quality, as devoid of bias as possible, and that analytical or inferential statistical tests are used to attribute outcomes to interventions. It all sounds very neat and reasonable.
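To make the control chart language concrete, here is a minimal sketch of the kind of calculation that sits behind talk of common and special cause variation. It is a hypothetical illustration only: the ward scenario, the weekly counts and the variable names are invented for this post, not drawn from any real improvement project.

```python
# A minimal, hypothetical XmR (individuals) control chart calculation.
# The weekly counts are invented purely to illustrate separating
# common-cause (routine) from special-cause (signal) variation.

weekly_delayed_discharges = [12, 15, 11, 14, 13, 16, 12, 14, 28, 13, 12, 15]

centre_line = sum(weekly_delayed_discharges) / len(weekly_delayed_discharges)

# Estimate process variation from the average moving range between
# consecutive points, the usual approach for an XmR chart.
moving_ranges = [abs(b - a) for a, b in
                 zip(weekly_delayed_discharges, weekly_delayed_discharges[1:])]
average_moving_range = sum(moving_ranges) / len(moving_ranges)
sigma_estimate = average_moving_range / 1.128  # d2 constant for subgroups of 2

upper_limit = centre_line + 3 * sigma_estimate
lower_limit = max(0.0, centre_line - 3 * sigma_estimate)

print(f"centre line {centre_line:.1f}, "
      f"control limits ({lower_limit:.1f}, {upper_limit:.1f})")
for week, count in enumerate(weekly_delayed_discharges, start=1):
    if count > upper_limit or count < lower_limit:
        verdict = "special cause: worth investigating"
    else:
        verdict = "common cause: routine variation"
    print(f"week {week:2d}: {count:3d}  {verdict}")
```

With these invented numbers the spike in week 9 falls outside the limits and is flagged as a possible special cause, while the rest is treated as noise rather than evidence of improvement or deterioration.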

But like all mantras it has a political purpose. Differentiating between improvement and judgement allows improvers to position themselves on the side of the angels. Improvement is benign, positive, enabling; accountability is malign, negative and damaging. Improvers make reasonable judgements and failure is not an option; holding people to account is unreasonable, done by people who don’t even understand the basics of common and special cause variation, never mind the intricacies of statistical probability.

Where the improvement world has come from is understandable but I don’t think that its position is sustainable. Nearly a quarter of a century after quality improvement techniques were introduced into the health sector from manufacturing industries it should be mainstream, but it isn’t. In most organisations only a small proportion of enthusiasts are engaged with using systematic and data-driven improvement activities. There are many reasons for this, and scepticism about some claims of success that are made using poor quality data is one of them. Allowing questionable data to be used in questionable ways does not help to place an improvement philosophy and methods where they need to be – centre stage.

Anyway, it’s naive to suggest that we shouldn’t judge. I’m writing this blog on a train and the man sitting next to me is wearing a very dodgy yellow shirt. That’s a judgement. I’m sure that others in the carriage think he’s a fashion icon. And now you are making a judgement about my judgement. Judging is part of the human condition and what matters is not the judgement itself but the implications of the judgement – and the implications of using data inappropriately are significant.

Rather than service-based improvers, system managers and academics each using different data, we should aim for convergence, using data as a common language in a way that allows everyone to focus on a common interest – improving value for individuals and communities.

So, we need to improve the quality of data we use for improvement so that better judgements can be made, and improve the quality of data we use for judgement, so that better improvements can be made. And no, that’s not a mantra, it’s a suggestion.

I’m a Health Service Researcher, get me out of here!

By Martin Marshall, on 14 June 2013

Professor Martin Marshall

Lead, Improvement Science London

Science hasn’t always been as fashionable as it is today. It used to be seen as something that only pointy-headed weird people did, a bit scary to everyone else. But then along came Susan Greenfield and Brian Cox, science was popularised and stars were born. Why should actors and football players have a monopoly on fame? Science can be edgy and fun, a subject that school children look forward to with excitement rather than with dread.

So I wasn’t completely surprised to hear about the online resource I’m a scientist, get me out of here! (http://imascientist.org.uk/). The model will be familiar to fans of reality TV. School children go online during their science lesson and get to ask a panel of five real-life scientists whatever important questions they have on their minds. Like why does food go mouldy? Or can I clone my mum? Or why do the hairs on my arms stand up when I’m scared? Students submit questions which the scientists try to answer by the next day. They then have live, Facebook-style online chats, ask questions, learn more about what it is like to be a scientist, and let the scientists know their opinions. Questions and discussions take place over a two-week period and then the voting starts. One scientist a day is evicted, perhaps because they provided the least convincing answers, or were just plain boring. After five days, and five harrowing (for the scientists) rounds, the last scientist standing wins £500 to spend on a science communication project.

Wouldn’t it be interesting if the ‘Dissemination’ section on a research grant application form said ‘We are going to play I’m a Health Service Researcher, get me out of here! with a group of managers and clinicians from our local hospital and general practices’. It would certainly make a change from ‘We will publish our findings in a highly ranked peer-reviewed scientific journal’. I can just imagine questions like how do we get our clinicians to follow guidelines? Or what are the implications of merging two hospitals? Or what impact do financial incentives have on professional motivation? The answers would be revealing and, like the school children, I doubt if your average manager would allow a Health Service Researcher to get away with answers that weren’t convincing or useful.

So there’s an idea for promoting evidence-informed service improvement. Anyone want to try it?

The dilemma of rigour and relevance

By Martin Marshall, on 3 April 2013

Professor Martin Marshall

Lead, Improvement Science London

Only a few books have the potential to change the way we see the world but for me Donald Schon’s The Reflective Practitioner is one of them. Schon describes two worlds: ‘the high hard ground’, where good research helps us to solve problems in a rational way, and ‘the swampy lowlands’, where problems are messy and confusing and don’t seem to fit with the research evidence. Clinicians and managers working in the swampy lowlands of the NHS have to deal with the tension between these worlds, a tension which Schon refers to as ‘the dilemma of rigour and relevance’.

I was thinking about this dilemma last week at a meeting of our local Clinical Commissioning Groups. The purpose was to share learning from work being undertaken by each of the groups and it was one of the most stimulating meetings that I’ve been to in a long time. We heard about a project which has resulted in considerable savings by reducing the frequency of self-monitoring for patients with diabetes; we heard how the Quality and Outcomes Framework has dramatically improved blood pressure and cholesterol control; how the introduction of a new ECG telehealth service has improved care for patients with ischaemic heart disease and atrial fibrillation; and how a virtual ward initiative has reduced the rate of unplanned admissions at a local hospital.

For more than two hours not a whinge was heard about politics, poor morale or anything else. Critics who claim the NHS is in trouble and that CCGs are not up to their task would have been made to eat their words by the sheer commitment and talent of the clinicians and managers who presented their work. You wouldn’t have believed that they were working for organisations that weren’t yet even legal entities. They presented ideas and interacted with each other as if they’d been around for years, and of course many of them have been, in different guises and employed by different structures. The focus on content lent a healthy sense of continuity to the discussion.

Two things struck me about the meeting. With my NHS hat on I was convinced from the data presented that the work really was making a difference. I heard how services were being redesigned, could see trends heading in the right direction and was persuaded by figures clearly showing better outcomes. But with my academic hat on I wanted to ask the kinds of questions that researchers like to ask, sometimes helpful but sometimes irritating. Like ‘are you sure that’s a trend?’, or ‘is it statistically significant?’, or ‘where are the controls?’, or ‘do you know how these apparent changes are happening?’, or ‘what’s your underpinning theory of change?’, or ‘what does the published evidence say about this?’. Irritating questions, as I say, but important ones if we are to move beyond successful projects to deeply embedded and sustained system-wide change.

It is neither possible nor desirable for academics to put the brakes on the kind of work that was presented at the meeting. But nor are we best serving the needs of our communities by ignoring the contribution that academics can make to service-based improvement. Schon’s dilemma is a real one that we need to solve through closer partnerships between decision makers in the health service and applied researchers.

Schon DA. The Reflective Practitioner: how professionals think in action. London, Temple Smith, 1983

Researchers in Residence

By Martin Marshall, on 4 February 2013

Professor Martin Marshall

Lead, Improvement Science London

One of the challenges facing the science of improvement is the need to design and evaluate practical ways of narrowing the gap between those who produce research evidence and those who use it. The Researcher in Residence model is an approach which is stimulating a lot of interest.

The in-residence model is not new. Wimbledon has a Poet in Residence, Heathrow airport an Artist in Residence, the British Library an Entrepreneur in Residence, and it is widely reported that Will Self is about to be appointed as the BBC Radio 4 Writer in Residence. The problem that the model is trying to address has sociological roots. People with expert knowledge or skills tend to seek, or sometimes just find themselves in, the company of like-minded people. A process of socialisation ensues which differentiates them from others without that expertise. This process allows them to develop their expertise in depth but also risks rarefying that expertise and preventing others from gaining access to it. The in-residence model attempts to bring that expertise back to the masses.

Researchers in Residence have been active in the field of education for nearly two decades but the model’s potential in the health sector has yet to be realised. Rare examples include the work of the anthropologist Paul Bate, who provided organisational insights for staff at University College Hospital in London, and the work of Martin Utley and Christina Pagel, academic modellers working with Great Ormond Street Hospital to help them deal with a problem of patient flow through operating theatres.

The Researcher in Residence model places the researcher in an unaccustomed active role – not a detached observer or even a passive participant, but a stakeholder in the success or otherwise of the initiative being implemented. Such a model challenges traditional perceptions of researchers as objective and remote from the endeavour under study. The researcher becomes a core part of the delivery team, bringing expertise which is different from but complementary to that of the managers and clinicians involved. By blurring traditional boundaries between experts, relationships change and power and influence are redistributed between stakeholders.

In the health sector, the specialist expertise brought by the researcher might include a deep understanding of the published evidence and of national and international experience in the field; a theory-based appreciation of how to achieve change in organisations and in individual people; an understanding of the generic facilitators and barriers to improving quality (such as project design and planning, organisational context, how to embed and sustain change and the unintended consequences of change); expertise in how to assess whether and how an intervention is making a difference; and, finally, a sophisticated understanding of how to use data in ways that produce new insights. Academics do not have a monopoly on any one of these areas of expertise but they have been specially trained to utilise them.

These areas of expertise are brought to the table by the researcher, and their meaning and usefulness are actively negotiated with other members of the implementation team rather than being ‘imposed’ on or ‘transferred’ to them. Whilst this potentially conflicted role is complex and not without risks, the model is based on the belief that the increased likelihood of the project succeeding outweighs those risks. This is most likely to be the case if the researcher has experience, the respect of their peers and an understanding of the complexity of the health service.

There are many unanswered questions about the Researcher in Residence model but it seems to me to have potential. If you have any experience of using the model, or are interested in exploring it further (particularly if you are interested in funding and establishing one in your own organisation), do get in touch.

Evidence-based management – an era of optimism?

By Martin Marshall, on 23 October 2012

Professor Martin Marshall

Lead, Improvement Science London

Many factors influence the decisions made by managers and clinicians about how best to organise and deliver healthcare. Scientific evidence has traditionally played a small part but this has the potential to change, aided by the emergence of the science of improvement.

People responsible for improving how we implement what we know are now facing a similar challenge to that faced by those providing clinical care 20 years ago. Prior to the advent of the Evidence Based Medicine (EBM) movement, clinicians often did what they did because they had always done it that way, or because their teachers told them to do it, or because it felt like the right thing to do. EBM challenged this, introducing a more systematic approach to clinical decision making which drew less on personal experience and more on rigorous and systematic research evidence. Is there anything that the nascent evidence-based management (EBMx, if you’ll excuse yet another acronym) movement can learn from the now mature EBM movement?

Ten years ago, David Naylor, a clinical academic from the University of Toronto, reflected back on the first decade of EBM and described four eras in its development. First, the era of optimism framed the problem as a lack of knowledge and the solution as promoting a better understanding of research, in the belief that passive diffusion of knowledge would make a difference. This was quickly followed by the second era, the era of innocence lost and regained, when the sheer volume of the literature became clear and we saw the emergence of evidence summaries in the form of guidelines. The disappointing uptake of guidelines led to the third era of industrialisation, in which a massive investment was made to purposefully and sometimes aggressively promote their use. And now we have entered the era of systems engineering, drawing on human factors learning and focusing on clinical decision making as a task that, like any other form of ‘work’, needs to be made as easy as possible with the aid of information technologies.

Where do we place evidence-based management on this evolutionary pathway? Pessimists might claim that the movement is not so much nascent as barely conceived. There are a few optimists around (mea culpa) but not many. Even on a good day, optimists will acknowledge that the implementation literature is even more vast, amorphous, inaccessible and full of gaps than the clinical literature. Efforts have been made by some, most notably the NHS Confederation, the SDO (Service Delivery and Organisation) R&D programme, the King’s Fund and the nine CLAHRCs (Collaborations for Leadership in Applied Health Research and Care) across England, to produce evidence summaries in areas such as care integration and hospital mergers, but the impact of these efforts on the decision making process is at best uncertain.

Whilst the idea of industrialisation or systems engineering of implementation evidence feels like a long way off, this is precisely where we need to head, and quickly. The challenges of scaling up and systematising the use of implementation evidence are significant, given the nature of the evidence and the readiness of the decision makers to operate differently, but they are not insurmountable. The investments made by the National Institute for Health Research into knowledge mobilisation and a second round of CLAHRCs will help.

But perhaps the biggest challenge is that managerial decision making will always be different from clinical decision making. By its very nature, it will always be less rational, more political, more influenced by pragmatism and ideology. Whilst the use of evidence will improve the decision making process, it will never dominate it. So perhaps we need to think about an additional era in its evolution, an era that we might call enlightenment version 2.0. In the eighteenth century, the Enlightenment movement promoted science over rationalism and authority. Version 2.0 doesn’t turn the clock back so much as acknowledge that in some realms of life decision making is more complex, imperfect and irrational than in others. We desperately need good scientific evidence to inform management decisions, but perhaps we need to think more creatively about how to generate and mobilise that evidence.

What have academics ever done for us?

By Martin Marshall, on 10 September 2012

Professor Martin Marshall

Lead, Improvement Science London

I had a fascinating conversation with a group of senior health service managers and clinicians in North London recently. I had been invited to a meeting to discuss how the local health community could better integrate and coordinate services for vulnerable elderly people. As the meeting progressed, decisions were made to improve the ways in which information is transferred between health and social care, shift some services out of the hospitals into the community and introduce care coordinators – pretty straightforward decisions for those responsible on a daily basis for organising and delivering health and social care services.

Towards the end of the meeting I couldn’t help but reflect on what had influenced these decisions. Political pragmatism probably came top of the list (‘we couldn’t possibly do that, our GPs wouldn’t engage’), closely followed by personal experience (‘I saw care coordinators working really well in my previous job’) and a bit of ideology (‘what we need is more/less competition’).

Perhaps a little mischievously, I asked the group whether the decision-making process, or the decisions themselves, would have been any different if an academic had been part of the conversation. Some were unsure but most were dismissive. Well-rehearsed arguments were voiced – academics have their heads in the clouds, they don’t understand the need to make quick decisions, they are too purist, too nihilistic and they don’t speak a language that we understand. Stereotypes perhaps, but even as an academic myself, I wasn’t inclined to be too defensive.

But I did push them. ‘How about’, I said, ‘if you had a friendly academic whispering in your ear as you made the decisions, someone who really wants to be useful. What could they bring to the table?’. The conversation then opened up. One person said that they were vaguely aware that some research had been conducted into integrated care (there is a substantial international evidence base) and it would be useful if the academic could bring this evidence to the table and help interpret what it meant within the local context. Another person said that the need for change lay at the heart of the work and that whilst they had a good practical understanding of how to go about it, they were aware that there was a vast literature about the psychology of individual change and the sociology of organisational change and it would be helpful if this theory could be described in an accessible way. Someone else said that they wanted a more objective assessment of whether what they were planning was likely to have an impact, so they would like to draw on the academic’s expertise in evaluation. And a fourth person admitted that they had lots of data but they would like help to analyse or interpret it in more sophisticated ways and it would be useful to have academic advice.

And they could have gone on. It felt remarkably like a Monty Python ‘what have the Romans ever done for us’ conversation. Whilst clinicians and managers will always have responsibility for making decisions about how care is organised and delivered, and these decisions will always be influenced by factors other than the scientific evidence, academics have a role to play and this is what the science of improvement is about – helping decision makers in the service to make better use of the scientific evidence and helping academics to produce more useful research. The use of clinical research, through the evidence-based medicine movement, has become deeply embedded in the psyche of health professionals over the last two decades. We now need to develop and embed an equally convincing science to underpin health system improvement. This will be a significant cultural challenge but the stakes have never been higher.

Read all about it!!

By Martin Marshall, on 28 August 2012

Professor Martin Marshall

Lead, Improvement Science London

There are some impressive quality improvement projects going on in the health service but the learning from them is rarely spread beyond the people directly involved. Quality improvement projects often make a difference but the improvers don’t seem to want to talk about it. In contrast, traditional research too often fails to have impact but researchers are pretty good at disseminating their work in leading journals.

The difference between ‘doers’ and ‘thinkers’ may be a stereotype but it cuts to the quick. Many quality improvers simply don’t think about publishing their work and some actively reject the scholarly tradition. A smaller number do want to publish but complain that journal editors aren’t interested.

They’re wrong. Take a look at an article in the 7th July issue of the BMJ about preventing venous thromboembolism (VTE). The BMJ has an impact factor of 14.093, for those who care about these things; it’s read by lots of clinicians and titillates the mass media, for those who don’t. A team from Johns Hopkins Hospital in Baltimore, USA, carried out a prospective quality improvement programme in a single hospital which aimed to increase the proportion of patients who had their VTE risk formally assessed on admission to hospital and were given preventative treatment where appropriate. Over a six-year period from a baseline in 2005, risk-appropriate VTE prophylaxis increased from 25% to 92% in medical patients and from 26% to 80% in surgical patients.

It’s not hard to see why the BMJ wanted to publish this work; so much about it is impressive. The project was designed and carried out in partnership between clinicians and managers in the hospital and a team of academics who have built an international reputation for their pragmatic but robust application of the science of improvement. What brought the team together was a shared commitment to solving a practical problem which is known to cause significant complications, death and increased costs. They used an established framework to guide the project (the TRIP, or translating research into practice, model) and a thoughtful set of theories about how they planned to change the mindsets and behaviours of practitioners and the environment of the organisation. They designed a set of evidence-based interventions that combined technical (such as computer-based decision support) and social (such as pizza parties for the staff – honestly) elements and a judicious mix of ‘hard’ and ‘soft’ levers for change. They were self-critical, didn’t over-claim and demonstrated flexibility when their first approach proved ineffective. And they stuck at it.

The end result is an elegant but honest account that others might not want to copy (would you prefer an evening in the pub or a pizza party?) but which they can learn from. It is the combination of scientific rigour and practical utility that brought this work to the attention of an international audience, and it’s the same combination that will improve outcomes for patients. Simple and powerful.

Sounds good but is it science?

By Martin Marshall, on 18 July 2012

Professor Martin Marshall

Lead, Improvement Science London

The English language is a fascinating thing; powerful in the hands of Shakespeare and Yeats, inadequate in the hands of, well, me. An uncomfortable light has been shone on my communication skills in recent months as I have tried to explain and promote the science of improvement. The people that I want to influence – decision makers in the health service and academics – bring a healthy mix of enthusiasm and scepticism to the table.  At the start of one of our first ‘engagement’ events I asked the participants whether they were familiar with and understood the term ‘improvement science’. To my surprise, the vast majority of people put up their hands. There then followed a forensic hour-long examination of both words, on their own and in combination, and a lively, sometimes heated, discussion. I had planned to repeat my straw poll but I wasn’t sure that I wanted to reveal what I suspect would have been a complete reversal of the earlier poll. The science of improvement may be an intuitively appealing term but it is a problematic one too.

Since that first meeting people have become more interested in the principles of improvement science and less inclined to search for a tight definition, but the question refuses to go away. ‘I really like the features of the science of improvement’, people say, ‘the ways in which it brings together the expertise of service-based decision makers and academics, but is it really a science? Surely improvement is a goal, an ambition, perhaps even an imperative, but not a science?’

I want to rise to this challenge. I have no doubt that describing improvement in scientific terms is both reasonable and useful. I’ll justify this view by going back to the basics. Science is a way of knowing. It is not the only legitimate way of knowing (contrary to what some of my tutors told me at medical school), nor should it necessarily be described as the ‘best’ way of knowing, but it is the most rigorous approach to acquiring knowledge. It is characterised by systematic ways of thinking, using observation and/or experimentation to build theories and evidence that aim to produce knowledge that can be generalised or transferred beyond the location in which it was created. This contrasts with other common ways of acquiring knowledge such as those based on superstition, authority, intuition, rationalism or experience. Science has developed as a way of reducing the intrinsic biases associated with these popular and every-day ways of thinking.

So, let’s look at efforts to improve services at any level in the health sector, from large-scale policy design to small-scale practices in the front line. You are likely to see some activities that are planned in a systematic way, that are informed by the best possible research evidence and social theory, that are based on rigorous observational, and sometimes experimental, data, and that aim to produce learning that others can reflect upon, and replicate or adapt. It’s difficult to argue that this isn’t science, unless you are one of those people who think science only happens in test tubes. But you will also see shoddy improvement activities: poorly planned, ignorant of the evidence, detached from consensus views about how change can be achieved, using misleading data, unwilling to consider the wider implications of their work and inclined to make exaggerated claims of impact. This, in the words of Ben Goldacre, is Bad Science.

The application of Good Science to improvement work benefits patients. Bad Science is at best misleading, at worst wasteful and damaging. So, let’s be proud to call it Improvement Science.