CBC Digi-Hub Blog


A balancing act in digital health

Posted on 1 August 2017

By Dr. Claire Garnett, Research Associate at the UCL Tobacco and Alcohol Research Group


Digital health interventions have the potential to play a key role in providing health care and behaviour change interventions to individuals. However, as academic researchers developing these interventions, we have to perform a balancing act between informing science and giving users what they want.

Specific behaviour change theories are a good place to start when developing an intervention and can inform its scientific content. Theories provide a method for selecting intervention techniques and help researchers understand what makes an intervention effective – the “mechanism of action”. This means that theory can not only inform the development of an intervention; in turn, the results of evaluating that intervention can help refine the theory.

However, if a digital intervention does not provide a great user experience, then it won’t be used – simple as that. So a certain emphasis must be placed on creating a product with a good user experience.

Keeping these two principles in balance – informing science and producing a great user product – is critically important, though much harder to do than I first thought.

Users of apps and websites have very little patience for anything that isn’t easy to do or requires significant input on their part. This conflicts with the desire to test the proposed “mechanism of action”: adding a screen of extra questions to assess it is likely to get your digital intervention abandoned.

I was involved in the development and evaluation of a smartphone app to reduce excessive alcohol consumption – Drink Less (www.drinklessalcohol.co.uk). We decided that the risk of users abandoning the app due to a large user burden was too great, as we were starting from scratch and needed to build our user base from zero. So we minimised the user burden by not including any measures of the proposed mechanisms of action for the intervention techniques.

Science does rely on assumptions, and our proposed mechanisms of action did have evidence from existing research (e.g. experimental studies). However, most of these mechanisms had not been tested “in the wild”, or more specifically, in a smartphone app.

As the data started rolling in and the usage statistics were looking great, we were feeling positive. But we hit a stumbling block when running our analyses – because we did not have a measure of the mechanism of action, it was not clear how to interpret our results.

For example, support for the null hypothesis (i.e. no detectable difference in past-week alcohol consumption between a normative feedback version and a control version) could mean that the intervention technique was either:

  1. Effective at changing the mechanism of action, but the change did not lead to the predicted behaviour change (i.e. the mechanism of action did not act as predicted); or
  2. Ineffective at changing the mechanism of action, and therefore produced no change in behaviour – the simulation sketch below makes the ambiguity between these two readings concrete.
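To make this ambiguity concrete, here is a minimal simulation sketch in Python. Everything in it – the numbers, the `simulate` function, the simple linear model – is a hypothetical illustration of the logic, not Drink Less data or code: both scenarios below produce the same near-zero behavioural difference between arms, and only the (unmeasured) mediator difference tells them apart.

```python
# A minimal simulation sketch (hypothetical numbers, not Drink Less data)
# showing why a null behavioural result is ambiguous when the mechanism
# of action (the "mediator") is not measured.
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # users per arm

def simulate(mediator_shift, mediator_effect):
    """Return (behaviour difference, mediator difference) between arms.

    mediator_shift  -- how far the technique moves the mediator
                       (e.g. normative beliefs), in arbitrary units
    mediator_effect -- how strongly the mediator drives drinking
    """
    med_ctrl = rng.normal(0.0, 1.0, n)
    med_int = rng.normal(mediator_shift, 1.0, n)
    drinks_ctrl = 20.0 - mediator_effect * med_ctrl + rng.normal(0.0, 5.0, n)
    drinks_int = 20.0 - mediator_effect * med_int + rng.normal(0.0, 5.0, n)
    return drinks_int.mean() - drinks_ctrl.mean(), med_int.mean() - med_ctrl.mean()

# Reading 1: the technique moves the mediator, but the mediator does not
# drive behaviour -> null behavioural difference.
print(simulate(mediator_shift=1.0, mediator_effect=0.0))

# Reading 2: the technique fails to move the mediator at all
# -> the same null behavioural difference, for a different reason.
print(simulate(mediator_shift=0.0, mediator_effect=2.0))

# The behavioural difference is ~0 in both cases; only the mediator
# difference (which we did not collect in the app) tells them apart.
```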

As researchers working on digital interventions, there are times when we need to rely on assumptions about the proposed mechanism of action from previous studies. However, we also need to check these assumptions in the context of digital interventions. One option is to test the digital intervention in an experimental study, where there is no fear of losing app users; however, this leaves the issue that the app has not been tested in its “true” setting. Another option, if you already have an app with a large user base, is to start adding questions to assess the proposed mechanism of action – though you then run the risk of losing users and getting negative reviews, which reduces the chances of new users downloading your app.
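As a rough illustration of that second option, here is a hedged sketch of one way to cap the burden of an added mechanism-of-action question: show a single item to a small random fraction of sessions, and rate-limit how often any one user sees it. All names and numbers here (`should_ask_mediator_question`, the 5% probability, the fortnight gap) are my own assumptions for illustration, not how Drink Less works.

```python
# A hypothetical sketch of a low-burden mechanism-of-action probe:
# only a small random fraction of sessions see a single question, and
# no user is asked more than once per fortnight. All names and numbers
# are illustrative assumptions, not the Drink Less implementation.
import random
import time

ASK_PROBABILITY = 0.05            # ~5% of sessions see the question
MIN_GAP_SECONDS = 14 * 24 * 3600  # at most one probe per user per 14 days

last_asked = {}  # user_id -> timestamp of the last probe shown

def should_ask_mediator_question(user_id):
    """Decide whether this session gets the one-item mediator probe."""
    now = time.time()
    if now - last_asked.get(user_id, 0.0) < MIN_GAP_SECONDS:
        return False          # asked too recently; keep burden low
    if random.random() > ASK_PROBABILITY:
        return False          # most sessions are never interrupted
    last_asked[user_id] = now
    return True
```

Sampling like this trades per-user data for retention: each user contributes very little mediator data, but across a large user base the aggregate picture can still accumulate.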

It’s a difficult balance to strike, and one I’m not sure how best to achieve. If anyone’s got any insights or ideas, I’d love to hear them!

Questions to consider:

How do we balance informing science (and collecting the data necessary to do so) with creating a digital product that provides a great user experience?

Is it necessary to test theoretical assumptions in a digital intervention “in the wild”, or should we rely only on theoretical assumptions from experimental studies, which can be more controlled?

3 Responses to “A balancing act in digital health”

  • 1
    Sarah Warren wrote on 1 August 2017:

    Q.1 If users were informed that they would be helping science and others by answering the questions, might that be less off-putting?
    Q.2 Would you / should you not test this to find the answer?

  • 2
    Claire Garnett wrote on 1 August 2017:

    Q1. I think that informing users that they’re helping science can help, though it doesn’t appear to be sufficient to retain a high proportion of users throughout a lengthy registration process. We did this in Drink Less and kept registration to a minimum, though we still had a substantial number of dropouts during registration.
    Q2. Agreed, it would be great to test this question, though very tricky to do!

  • 3
    Nick wrote on 15 September 2017:

    I wonder if simplifying the questions down to simple poll-based questions (thumbs up/thumbs down, yes/no) and using push notifications to split the question delivery into a number of single questions, rather than a large set (more than 3), would be effective in increasing response rates. It is becoming easier and easier to create ‘out of app’ data collection moments within apps that are less intrusive but still potentially useful avenues of engagement.
