Effective engagement: what do ‘engagement metrics’ actually mean for behaviour change?
By Artur Direito, on 12 October 2017
By Dr. Ben Ainsworth
Research Fellow in the Centre for Clinical and Community Applications of Health Psychology, University of Southampton
Anyone who works in digital behaviour change knows the value of ‘metrics’. The apps, websites and gadgets that we use to encourage healthy behaviour churn out reams and reams of numbers, such as ‘the total time a person spent using our website’, that are then analysed and explored by researchers.
An important challenge that’s sometimes overlooked is what these metrics actually mean for the person using the intervention. It’s no use creating a website that people use frequently if it doesn’t make any difference to their actual behaviour (as Dr Camille Short noted last month).
In fact, focusing on increasing ‘engagement metrics’ can even be misleading. For example, if a message is clear, easily understood and makes sense to the user, they may never need to log in again!
For example, the LifeGuide team (at the University of Southampton) have been working on a programme to improve quality of life for people with asthma. When we asked people with asthma to try it and tell us what they thought, one patient said: “I’ve realised that using my inhalers every day helps my breathing, and I don’t think I’ll need to use the website again”.
This is called ‘effective engagement’ – the degree to which a user needs to ‘engage’ with an intervention for it to actually change their behaviour – and it’s an important concept in understanding how to design and implement a good intervention.
Recently, we analysed trial data from 9000 users of a four-session programme to prevent illness by increasing hand hygiene. Although most people logged in all four times (people in a trial are often quite keen!), almost all the change in behaviour occurred after just one session.
This was really useful information to have – it meant that when we developed an updated version of the intervention, we were able to focus it on just one session. We could remove the need for people to register, revisit and so on – things that are often huge barriers to digital intervention uptake.
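The underlying analysis can be surprisingly simple. As a purely illustrative sketch (the function, the data and the scores below are invented for this post, not taken from the trial), one way to spot this pattern is to measure each user’s behaviour after every session and compare the average change at each step:

```python
# Hypothetical sketch: given each user's behaviour score measured at
# baseline and after every session, work out how much of the change
# happens at each step. All numbers below are invented for illustration.

def change_per_session(scores_by_user):
    """scores_by_user: list of per-user score lists, one score per
    time point (baseline at index 0, then one per session).
    Returns the mean change in score at each step."""
    n_steps = len(scores_by_user[0]) - 1
    mean_deltas = []
    for step in range(n_steps):
        step_changes = [user[step + 1] - user[step] for user in scores_by_user]
        mean_deltas.append(sum(step_changes) / len(step_changes))
    return mean_deltas

# Three invented users: a baseline score plus four sessions each.
users = [
    [2, 7, 7, 8, 8],
    [3, 8, 9, 9, 9],
    [1, 6, 6, 6, 7],
]

print(change_per_session(users))  # nearly all the change is at session 1
```

If the first number dwarfs the rest – as in this toy data – most of the behaviour change is happening in the first session, which is exactly the signal that let us simplify the intervention.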
Of course, what works for one behaviour change programme might not work for another. This is why it’s important to consider the type of effective engagement that is needed, and – at least for those involved in research – how to measure whether it’s achieved or not.
Having all of this to think about can be a bit frustrating… which is why it’s so tempting to assume that ‘more logins = more engagement = behaviour change’. But people are so different, and so much more interesting, that it’s worth spending the time to really understand how digital interventions can impact behaviour, even if it takes a little bit more effort.
- Think about a behaviour that you might want to target, and what would constitute ‘effective’ engagement – how might you measure this?
- What are the challenges in identifying whether engagement is effective or not?
- Could individual differences in people affect what constitutes ‘effective engagement’? How could we identify and address this?
Ben holds an NIHR Post-doctoral Research Fellowship funded by an NIHR School of Primary Care Translational Award, at the University of Southampton. He also works as a Research Fellow in the LifeGuide Team in the Centre for Clinical and Community Applications of Health Psychology. He’s interested in how to use digital interventions effectively to improve quality of life for people with chronic health conditions, and how statistical analysis of website data can inform this. He’s also on Twitter! @benainsworth