Applying Analytics to Measure Effective Engagement with mHealth Apps
By Emma Norris, on 19 February 2019
By Quynh Pham – University of Toronto
As a researcher in the mHealth space, I often feel a nagging sense of duty to buy into the hype around apps and their promise to transform chronic health and care. A crucial detail that has always thwarted my conversion from sceptic to believer is the evidence: while some apps have demonstrated efficacy in definitive trials, others have performed poorly.
One good explanation for this fickle effect is that people do not engage with apps as intended. People are complex, their experiences of health and interactions with illness are complex, and their engagement with a technology to care for themselves is no different. In a quest to study these complex engagement patterns, researchers realized that when people use apps, they conveniently leave behind a rich engagement log data trail that can be mined for meaningful insights. They intuitively started applying analytics – defined as the use of data to generate new insights – to explore engagement with mHealth apps. This work sparked a host of new ways to measure analytic indicators of engagement, a term we use to mean proxy measures of engagement with an mHealth app based on objective usage that generates log data. However, it also introduced uncertainty due to researchers often taking a haphazard approach to indicator selection.
Our research group saw these inconsistencies as an opportunity to unite the field, and set out to consolidate how analytic indicators of engagement have previously been applied in mHealth research. We reviewed 41 studies published in the last 2 years that used log data analytics to evaluate engagement with an mHealth app for self-managing a chronic condition, and gained some valuable insights:
· The average mHealth evaluation included for review was a two-group pretest-posttest RCT of a hybrid-structured app for mental health self-management, had 103 participants, lasted 5 months, did not provide access to health care provider services, measured 3 analytic indicators of engagement, segmented users based on engagement data, applied engagement data for descriptive analyses, and did not report on attrition.
· Across the reviewed studies, engagement was measured using the following 7 analytic indicators: (1) the number of measures recorded, (2) the frequency of interactions logged, (3) the number of features accessed, (4) the number of logins or sessions logged, (5) the number of modules completed, (6) time spent engaging with the app, and (7) the number or content of pages accessed.
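Most of these indicators can be computed directly from an app's event log. As a minimal sketch, assuming a hypothetical log schema (the `user`, `event`, `page`, and `ts` field names and event labels are illustrative, not taken from the reviewed studies), six of the seven indicators reduce to simple counts and distinct-value tallies:

```python
# Hypothetical event log: each entry records one user interaction.
# Field names and event labels are illustrative assumptions.
log = [
    {"user": "u1", "event": "login",           "page": "home",    "ts": "2019-01-01T09:00"},
    {"user": "u1", "event": "record_measure",  "page": "glucose", "ts": "2019-01-01T09:02"},
    {"user": "u1", "event": "record_measure",  "page": "glucose", "ts": "2019-01-02T08:30"},
    {"user": "u1", "event": "complete_module", "page": "module1", "ts": "2019-01-02T08:45"},
]

def analytic_indicators(events):
    """Compute six of the seven analytic indicators for one user's log.

    Time spent engaging with the app (the seventh indicator) is omitted:
    it requires sessionizing the timestamps first.
    """
    return {
        "measures_recorded":   sum(e["event"] == "record_measure" for e in events),
        "interactions_logged": len(events),
        "features_accessed":   len({e["event"] for e in events}),
        "logins":              sum(e["event"] == "login" for e in events),
        "modules_completed":   sum(e["event"] == "complete_module" for e in events),
        "pages_accessed":      len({e["page"] for e in events}),
    }

print(analytic_indicators(log))
```

What counts as a "feature" versus a "page" will differ by app; here features are approximated by distinct event types and pages by distinct page identifiers, which is one of several defensible mappings.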
· Of the 41 studies included for review, 24 presented, described, or summarized the data generated from applying analytic indicators to measure engagement. The remaining 17 studies used or planned to use these data to infer a relationship between engagement patterns and intended outcomes.
Conducting this review allowed us to index all the analytic indicators of engagement being used in the mHealth field into 7 distinct domains. We found that researchers favored evaluating the number of measures recorded on an app as an analytic indicator of engagement, closely followed by the frequency of interactions logged. This finding didn’t surprise us because of how pervasive data collection functionality is in most apps. What did surprise us was that researchers were least likely to measure the number of pages accessed and time spent engaging with an app. These indicators have always been incredibly popular for measuring engagement with Web-based interventions. We believe they are falling out of favor because of the growing recognition that users engage differently with apps compared to websites. They perceive apps to be a short-term commitment and access app-based content sporadically for shorter periods of time. From this, we recommend that researchers should refrain from measuring and reporting these 2 analytic indicators of engagement unless they are expressly relevant to the app under study.
We did not find any significant differences between the number or type of analytic indicators used to measure engagement across chronic conditions. Researchers applied indicators that were relevant to the features and functionality of their app. For example, studies of apps for diabetes self-management often measured the number of blood glucose readings due to the popularity of this feature, but never measured the number of modules or lessons because these features were not offered to users. While some scholars have called for less variation in how engagement is quantitatively measured across studies, we recommend that researchers continue to apply context-specific analytic indicators, but report them more systematically so that they can be compared and contrasted across studies.
Although researchers measured, on average, 3 indicators in a single study, the majority reported findings descriptively and did not further investigate how engagement with an app contributed to its impact on health and wellbeing. This finding suggests that researchers are gaining nuanced insights into how users are engaging with their apps but are not conducting inferential analyses to characterize effective engagement for improved outcomes. To move the field forward, we make the following recommendation: researchers seeking to gain a preliminary understanding of how users are engaging with their app are encouraged to apply all relevant analytic indicators from those identified in our review. Once they generate analytic insights, they might consider segmenting users by engagement behaviors to interrogate the data and refine their engagement models. By conducting inferential subgroup analyses with engagement as a predictor of observed health outcomes, researchers might uncover potential patterns of effective engagement and inform an operationalization of intended use. In this way, measuring engagement can be positioned on a methodological continuum toward determining adherence. Figure 1 presents a process model of our recommendations.
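The segment-then-infer workflow above can be sketched in a few lines. This is a minimal illustration, not an implementation from the paper: the per-user tuples, the median-split segmentation, and the least-squares slope of outcome on engagement are all hypothetical stand-ins for whatever indicator, segmentation rule, and inferential model suit a given study.

```python
# Hypothetical per-user summaries: (user id, measures recorded,
# change in symptom score). Values are illustrative only.
users = [
    ("u1", 5,  -1.0),
    ("u2", 20, -4.0),
    ("u3", 35, -6.5),
    ("u4", 50, -9.0),
]

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def segment(data):
    """Median-split users into low/high engagement segments."""
    cut = median([e for _, e, _ in data])
    return {u: ("high" if e > cut else "low") for u, e, _ in data}

def slope(data):
    """Least-squares slope of outcome on engagement: a crude signal of
    whether more engagement tracks with better outcomes."""
    xs = [e for _, e, _ in data]
    ys = [o for _, _, o in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(segment(users))
print(slope(users))
```

In practice the inferential step would use a proper regression model with covariates and uncertainty estimates rather than a raw slope, but the shape of the analysis (engagement indicator in, outcome association out) is the same.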
Without objective knowledge of how users engage with an app to care for themselves, the mechanisms of action that underlie complex models of digitally mediated behavior change cannot be identified. We hope that our review of analytic indicators can serve as a resource to support researchers in their evaluative practice. Raising the standard of mHealth app evaluation through measuring analytic indicators of engagement might help to make a stronger case for their causal impact on improved chronic health and wellbeing. I believe it is this opportunity afforded by data-driven research to close the gap between promised and realized health benefits that is most meaningful.
You can read the full paper here.
Some questions for discussion:
1. Do researchers currently feel supported by the availability of resources to help them apply analytic tags to their apps and generate valid engagement log data?
2. What study designs are appropriate to determine the ‘digital dose’ of effective engagement with an app?
3. How might we incorporate analytic indicators of engagement to strengthen the post-market surveillance of mHealth apps?
Quynh Pham (@qthipie) is a PhD Candidate in the Institute of Health Policy, Management and Evaluation at the University of Toronto, under the supervision of Dr. Joseph Cafazzo. Her research interests are powered by the use of data and analytics to conduct innovative evaluations of consumer mHealth apps for chronic conditions. She gets pretty excited about the potential to track human behaviour, specifically the way we engage with technology and the digital footprints we leave behind, and to visualize patterns in these behaviours that might help us to lead happier and healthier lives.