
CBC Digi-Hub Blog



Can digital interventions change behaviours?

Carmen E Lefevre · 9 November 2016

By: Sarah O’Farrell, social-cognitive psychologist and behavioural scientist 

Can digital interventions change behaviours? This question is often asked within the behaviour change community, and it’s one that I’m very fond of discussing – probably because I enjoy taking a step back and challenging the question instead of answering it, which is always fun.

My position is that ‘digital’ is not an intervention, a tool, a product, or a solution, but rather that it is a channel, and as with all channels, it can be used either effectively or ineffectively. The question, “Can digital interventions change behaviours?” is akin to asking, “Can policy make society a better place?”, “Can advertising convince someone to buy a product?”, “Can schools prepare children for the demands of the working world?”  The answer always has to be, “It depends…”

It depends on the quality of what is being delivered through those channels of policy, advertising, and schools, just as the quality of the intervention delivered through emails, apps, and websites will determine whether or not the digital channel is used effectively. A very helpful and constructive question to ask about digital interventions is: “What are the strengths of digital channels, and how do we play to them?”

To start with, we know that digital channels allow us to deliver highly targeted messaging to very precisely defined segments of a population, and that these messages can be timed to hit just when the audience needs them most. Always pass a McDonald’s on your way home? Your phone can send you just the inspiration you need to help you override the urge for French fries as you’re walking past the giant yellow arches.

This is a huge selling point of digital. Human behaviour is ‘of the moment’, and one of the reasons many people fail to stick to their goals is that they make plans and promises to themselves in a ‘cold’ state, but give up and give in when they’re in a ‘hot’ state. Digital allows us an unprecedented opportunity to reach the right people, with just the right information they need to hear, at the time when they need it most.

Digital devices also allow for the continual collection of objective data, which can be used, amongst other things, as the basis for creating commitment contracts with an audience. Often, health professionals have to ask their patients whether they ran last night or practised diaphragmatic breathing on the way to work. Now, thanks to GPS, heart rate monitors, cameras and voice recognition, digital devices know whether or not you did what you said you would, and can either reward or reprimand you accordingly. Commitment devices have been around since the days of Greek mythology, and where Odysseus had to tie himself to his ship’s mast in order to resist the temptations of the sirens’ song, today we can tie ourselves to commitment contracts with our phones and computers, which know whether or not we’ve run our 5k after work, and debit or credit us for our actions.

Digital channels allow us to reach millions of people at once, at very little cost; to connect with and support each other; and to learn and share new knowledge in a way never before imagined. Interventions delivered digitally are unlikely to change the systemic determinants of wellbeing and behaviour in society, nor are they likely to replace the deep insight, empathy and wisdom of working with highly skilled professionals. However, if we design our interventions to play to the unique strengths of digital devices, keep our propositions simple, and ensure we are delivering a great user experience for people, then we should have a very effective channel-BCT combination.


BIO: Sarah O’Farrell is a social-cognitive psychologist and behavioural scientist. She has delivered change strategies in the areas of organisational design, digital communications and product experiences, and the re-design of physical spaces and built environments, with organisations and clients such as the UK Department for International Development, The Bartlett School of Architecture, Ogilvy & Mather, and The Cambodian Rural Development Team. 

Agile evaluation of digital behaviour change interventions

Carmen E Lefevre · 2 November 2016

By Robert West; Centre for Behaviour Change, University College London

Digital behaviour change interventions (DBCIs) are typically apps and websites that aim to achieve lasting behaviour change in users, for example stopping smoking, reducing alcohol consumption, increasing levels of physical activity or reducing calorie intake.

We want to know whether DBCIs are effective and, if so, how effective and for whom, but we face a major challenge in doing so.

On the one hand, we can’t tell whether there has been lasting change unless we follow up a sufficiently large sample for at least several months after they have started using the DBCI. Effect sizes are usually small, so the sample typically needs to be on the order of hundreds. We also need a suitable reference against which to compare any observed change, so we can be confident that it would not have happened anyway.
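
As a rough illustration of why the sample runs into the hundreds, here is a standard two-proportion sample-size calculation; the 10% and 15% quit rates are invented for the example, not taken from the post.

```python
# Illustrative sample-size calculation for a two-arm trial comparing
# quit rates of 10% (control) vs 15% (DBCI) -- hypothetical figures.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Two-proportion sample size per arm (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)          # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

print(round(n_per_arm(0.10, 0.15)))  # ~683 users per arm
```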

On the other hand, we need to be more agile in our evaluations of DBCIs. Rapid changes in the digital landscape mean that DBCIs that are effective in one year may not be effective a couple of years later. Moreover, DBCIs have many different components and we cannot do large scale RCTs with long-term follow up on all the different permutations to come up with a combination that works.

So what is the best we can do when faced with this challenge? Probably the starting point is to adjust our expectations about what is achievable, particularly when we are first developing the DBCIs. We are unlikely to be able to achieve the same level of confidence in the lasting effect of a DBCI as we can with less context-sensitive clinical interventions. Once we have accepted that, we can look for more agile evaluation methods which can give an acceptable degree of confidence in the robustness and generalisability of the findings. Each situation will be different but a few key approaches suggest themselves.

One is to use short-term outcome measures that are known to predict long-term outcomes quite well (e.g. short-term smoking abstinence rates or self-reported craving).

Another is to use designs such as A/B testing to compare different variants of DBCIs (e.g. here).
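
A minimal sketch of what such a comparison might look like, using invented counts and a two-proportion test (the post does not specify an analysis method):

```python
# Minimal A/B comparison sketch with invented counts: the outcome is, say,
# 4-week self-reported abstinence among users randomised to each variant.
from statsmodels.stats.proportion import proportions_ztest

successes = [34, 52]   # users achieving the outcome in variants A and B
exposed = [400, 400]   # users randomised to each variant

z_stat, p_value = proportions_ztest(successes, exposed)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```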

A third is to move away from classical statistical methods with a fixed ‘significance level’ (typically p<0.05) and sample size, towards a Bayesian decision-making approach in which we accumulate data until we reach a threshold of confidence in effectiveness or lack of effectiveness.
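
As a sketch of how this could work, assuming a simple Beta-Binomial model, interim looks every 100 users, and decision thresholds of 0.95 and 0.05 (none of which the post prescribes):

```python
# Bayesian stopping-rule sketch: accumulate outcomes from two DBCI variants
# until the posterior probability that B beats A crosses a threshold.
# Model, thresholds and simulated rates are all assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def prob_b_beats_a(succ_a, n_a, succ_b, n_b, draws=20_000):
    """Posterior P(rate_B > rate_A) under uniform Beta(1, 1) priors."""
    rate_a = rng.beta(1 + succ_a, 1 + n_a - succ_a, draws)
    rate_b = rng.beta(1 + succ_b, 1 + n_b - succ_b, draws)
    return (rate_b > rate_a).mean()

succ_a = n_a = succ_b = n_b = 0
for user in range(1, 5001):              # stream of incoming users
    if user % 2:                         # alternate assignment: A, B, A, B...
        succ_a += rng.random() < 0.10    # simulated 'true' rate for variant A
        n_a += 1
    else:
        succ_b += rng.random() < 0.15    # simulated 'true' rate for variant B
        n_b += 1
    if user % 100 == 0:                  # periodic interim look at the data
        p = prob_b_beats_a(succ_a, n_a, succ_b, n_b)
        if p > 0.95 or p < 0.05:
            print(f"stop after {user} users: P(B beats A) = {p:.3f}")
            break
```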

Finally, and perhaps most importantly, from the very start of any evaluation we should monitor engagement and early markers of possible effectiveness for signs that the DBCI is not working. If, among the first 20 users, only a tiny proportion are getting past the first screen of an application that requires extensive engagement, there is little point in continuing with the evaluation – something will need to be changed.
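
In code, the rule could be as simple as the check below; the 20-user window comes from the post, while the 20% threshold is an assumption for illustration.

```python
# Early futility check sketch: flag the evaluation for redesign if too few
# of the first users get past the first screen. Threshold is an assumption.
def should_pause(passed_first_screen, users_observed, min_rate=0.20):
    """True once 20+ users have been observed and engagement is clearly low."""
    return users_observed >= 20 and passed_first_screen / users_observed < min_rate

print(should_pause(passed_first_screen=2, users_observed=20))  # True -> redesign
```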

For a guide on the development and evaluation of DBCIs, click here.


BIO: Robert West directs UCL’s Tobacco and Alcohol Research Group and works closely with British websites to develop and evaluate digital behaviour change interventions.