
CBC Digi-Hub Blog


Glimpse into the future – How will the field of digital health evolve in the next 5 years and what are the implications for behavior science? An industry perspective

By Emma Norris, on 30 April 2019

By Madalina Sucala PhD & Nnamdi Ezeanochie MD, DrPH – Johnson & Johnson

The Society of Behavioral Medicine (SBM) is a leading forum for behavior scientists and connected disciplines to study and promote behavioral health. Within the organization, the Digital Health Council is responsible for leading initiatives to inform and prepare society members, and behavior scientists overall, for the evolution of this field. As a member of the Digital Health Council, Dr. Madalina Sucala led an initiative interviewing experts in the field for their perspective on the evolution of digital health and the expected implications for behavior science. In the interview below, Dr. Sucala interviewed her colleague, Dr. Nnamdi Ezeanochie, who provided an industry perspective on the dynamic and rapidly evolving field of digital health.

 

MS: When thinking about the next 5 years, what do you think will be most important for digital health? [this can be broad, as it applies to technology, legislation, etc.]. Why?

NE: In the next 5 years, it will be imperative to design digital health technologies that work for all. This not only means ensuring equitable implementation of digital health solutions across different populations, but also leveraging real-world data from diverse communities to inform and shape how solutions are built and optimized over time. This challenge means that digital solutions will need to become more personalized and optimized while, at the same time, becoming ubiquitous, with the deliberate intention of narrowing the “digital divide” and reducing health disparities.

To personalize and optimize solutions within and across different populations, two components are needed:

1. An insights generation loop that rests on: i) behavior science evidence, ii) robust data capture within digital solutions, and iii) advanced analytics via data science methodologies. Together, these components generate insights on what works for different populations (sub-group analysis) and on how different populations may respond or perform (prediction).

2. A platform that houses these insights and can mechanistically deploy them in relevant contexts, over time, within and across different populations.

The presence of both capabilities (insights and platforms) will ensure that digital technologies in the next 5 years can truly be adaptive and effective for many populations globally. To facilitate the use of these capabilities on a larger scale, there needs to be sustained commitment from both the public and private sectors worldwide to invest in technology and digital infrastructure such as expanded mobile telecommunications, satellites, network infrastructure, data centers, etc. This is critical to ensure that no population is left behind and that access to the promised benefits of digital health solutions is achievable.

 

MS: When thinking about the next 5 years, which technology capabilities do you think will bring more promise for digital health? Why?

NE: The first category would be what I refer to as the “front-end layer,” which includes user-facing technologies. In this category, we can envision improved and highly convenient interfaces and product designs that will remove usage barriers and increase user engagement with technology. We can expect to see a trend towards technology that can be easily commodified, integrated into daily routines and used for increased user convenience. Voice technology, chatbots, and miniaturized devices with aesthetic appeal for health behavior tracking will also become more pervasive.

The second category would be what I refer to as the “middle layer.” With so many health behavior tracking possibilities (e.g. health behavior passively and actively tracked alongside data from smart homes, social media, transportation, personal wearables, care facilities, etc.), data integration and streamlining capabilities will become extremely relevant. For example, beyond the slick designs and attractive interfaces, digital health interventions need to consider application programming interfaces (APIs), which enable connectivity between various devices and programs, and facilitate the interaction between data, applications and devices.
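To make the integration idea concrete, here is a minimal, hypothetical sketch of what “middle layer” data streamlining amounts to: merging time-stamped records from multiple device feeds into one unified view. All feed names, fields and values below are invented for illustration; real integration would happen over device APIs rather than in-memory lists.

```python
# Minimal sketch of "middle layer" data streamlining: merging records from
# two hypothetical device feeds into a single timeline keyed by timestamp.
# All feed names, fields, and values here are invented for illustration.

wearable_feed = [
    {"ts": "2019-04-30T08:00", "steps": 412},
    {"ts": "2019-04-30T09:00", "steps": 1630},
]
smart_home_feed = [
    {"ts": "2019-04-30T08:00", "sleep_hours": 7.2},
]

def merge_feeds(*feeds):
    """Aggregate records from any number of feeds by timestamp."""
    merged = {}
    for feed in feeds:
        for record in feed:
            merged.setdefault(record["ts"], {}).update(
                {k: v for k, v in record.items() if k != "ts"}
            )
    return merged

unified = merge_feeds(wearable_feed, smart_home_feed)
print(unified["2019-04-30T08:00"])  # both feeds merged for that hour
```

The point of the sketch is the shape of the problem, not the code: once heterogeneous feeds share a common key, an intervention can reason over all of them at once.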

Finally, the third category would be the “back-end layer.” Digital health interventions will need to be built with a configurable system architecture – one that allows artificial intelligence capabilities for intervention personalization and optimization. This will allow for the configuration of intervention components based on advanced analytic insights for improved health outcomes. In addition to the system architecture, advanced data science methods and computational capabilities are needed to support personalization and optimization.

 

MS: How do you see the digital field evolving in the next 5 years?

NE: Evolution of the digital field in the next 5 years will be very exciting, and to some extent, unpredictable. This evolution is expected to occur at two levels: 1) the intrinsic evolution of digital solutions themselves, and 2) the extrinsic, or environmental, evolution of the context in which digital solutions will exist, be applied and interact.

Digital health solutions will become more responsive and adaptive, thereby delivering more personalized experiences to users, and constantly updating features to ensure sustained optimization and effectiveness. Big drivers of this intrinsic evolution include: better passive data tracking devices and sensor capabilities, the emergence of connected digital tools that seamlessly aggregate data from multiple sources, and new analytic tools and software that can develop algorithms to drive and automate how digital solutions respond in real-time and in different contexts. These features mean that we will no longer be required to manually or actively monitor our behavior, track our daily intake, or complete a survey repeatedly. Information will be aggregated and synced across platforms and used to make more responsive decisions about things that affect people daily. Therefore, we can expect to have smart technology that actively influences our lives without our full participation.

However, technology will not exist in a vacuum and will be shaped by several extrinsic factors. These include: an expected rise in people’s data literacy; participation and agency in controlling their data – and decisions on its subsequent use by third parties; and the creation of new legislation responsive to emerging capabilities of digital technology solutions. The introduction of the General Data Protection Regulation (GDPR) requirements, news of technology companies ordered to pay huge fines in Europe, and FDA involvement with digital health solutions in the United States are examples of current extrinsic factors influencing technology. These are expected to shift further in the next 5 years. So perhaps the most important takeaway is that a 5-year evolution of the digital health field is largely dependent on how impactful these extrinsic factors will be in time, and thus, this evolution remains unpredictable.

 

MS: What do you think behavioral scientists can do to be more prepared to work with digital health in the next 5 years? [this can be broad, as it applies to training, partnerships, etc.]

NE: The interdisciplinary nature of digital health will require behavior scientists to become familiar (at least at a level that allows collaboration) with the following:

  • Intervention co-design processes. While the scientific requirements for an intervention will still be the purview of science, learning how to present those requirements in an actionable way that can form the foundation of the co-design process will be key.
  • Intervention development. Although it might not be necessary for behavior scientists to learn how to develop software themselves, it is important for them to become familiar with current approaches in general technology or software development. For example, technology companies often adhere to what is called an Agile approach. This approach arose to challenge the traditional “waterfall” software development model, wherein entire projects are pre-planned and subsequently fully built before being tested with users. In contrast, the Agile approach emphasizes iterative flexibility, with testing early and often throughout development. Familiarity with the development team’s structure and roles, as well as a good understanding of the phases in which to offer input, is needed (e.g. Sprint review meetings in which they, as stakeholders, would review and provide input on what the development team has accomplished; demo testing to ensure adherence to the scientific requirements of the intervention).
  • Advanced analytics. Digital interventions will increasingly rely on modern data science methodologies. Machine learning, data mining and other modern analytic methods are needed to capitalize on intensive longitudinal data to identify factors that would inform intervention optimization. It would be important for behavior scientists to become familiar with such data science methods.

 

MS: What is the role that behavioral scientists will play in digital health in the next 5 years?

NE: The role that behavior scientists can play ranges from offering subject matter expertise on content development, intervention design, and research design, to providing the necessary interpretation of evidence towards actionable insights for improved interventions. In all these roles, it is important to consider that developing effective digital health interventions will increasingly depend on strong interdisciplinary partnerships among behavior scientists, designers, software developers, system engineers and data scientists, as no single group has enough expertise and resources to develop successful, effective digital health technologies on their own. Previously, in non-digital interventions, behavior scientists were able to work in a rhythm dictated by government funding cycles, relying on their own and their peers’ expertise. Currently, in technology-based interventions, interdisciplinary collaborations are necessary, and the rhythm might be dictated by the technology’s evolution as well. Behavior scientists will likely find themselves working on collaboration platforms, whether working in academia, or in the private or public sector.

 

MS: How do you see technology supporting behavioral health in the next 5 years?

NE: Technology has an important role to play in supporting behavioral health in the next 5 years. This role includes several opportunities:

First, the behavior science and health fields can leverage engineered platforms to deliver real-time behavior change interventions. This will help make digital health interventions more practical and applicable to real-world settings, where the factors that influence decisions and behaviors differ across and within populations.

Second, the behavior science field can deploy data analytic methods (e.g. machine learning, and deep learning) to inform insight generation that will help to improve existing behavior science theories and models. This will help ensure that the behavior science field continues to expand its theories to be more inclusive of different populations and contexts.

Third, technology can help support hybrid behavior science interventions. This means that behavior scientists can leverage several digital modalities and/or in-person experiences to deliver an intervention. This will help reduce barriers to entry and improve access to such interventions – and ultimately, help ensure that no one is left behind.

Finally, behavior scientists can use predictive models to inform and support preventive interventions. For example, a predictive model that identifies at-risk groups for a vaccine-preventable infectious disease, can allow health promotion experts to channel resources to that specific group and build preventive programs focused on them, rather than the general population. The cost-benefit of such data guided interventions cannot be over-emphasized.
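As a toy illustration of the targeting step described above: score each person on a few risk factors and flag those above a threshold for preventive outreach. The factors, weights, and threshold below are entirely invented for illustration and are not drawn from any real predictive model.

```python
# Hypothetical sketch of predictive targeting: score each person on a few
# risk factors and flag those above a threshold for preventive outreach.
# The factors, weights, and threshold are invented for illustration only.

WEIGHTS = {"unvaccinated": 0.5, "under_5": 0.3, "recent_outbreak_area": 0.4}
THRESHOLD = 0.5

def risk_score(person):
    """Sum the weights of the risk factors present for this person."""
    return sum(w for factor, w in WEIGHTS.items() if person.get(factor))

population = [
    {"id": 1, "unvaccinated": True, "under_5": True},   # score 0.8
    {"id": 2, "recent_outbreak_area": True},            # score 0.4
    {"id": 3},                                          # score 0.0
]

# Channel resources to the flagged sub-group rather than the whole population.
at_risk = [p["id"] for p in population if risk_score(p) >= THRESHOLD]
print(at_risk)  # → [1]
```

A real model would learn its weights from data rather than hard-coding them, but the cost-benefit logic is the same: outreach goes to the flagged sub-group instead of everyone.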

You can read a recent Digi-Hub blog giving a response from academics to these questions here.

 

Questions:

  • Based on your area of expertise, how would you answer these questions?
  • What could you do to be more prepared for the way the behavior science field might evolve?

 

Bios

Madalina Sucala is a Senior Manager of Behavior Science at Johnson & Johnson, where she leads the development, evaluation and implementation of digital behavior change interventions. She is a member of the Digital Health Council of the Society of Behavioral Medicine. Madalina has a PhD in Clinical Psychology and has completed a Postdoctoral Fellowship in Cancer Prevention and Control. Prior to joining Johnson & Johnson, Madalina was an Associate Scientist at the Icahn School of Medicine at Mount Sinai, where, in her dual role as a scientist and a practitioner, she developed, investigated, and delivered behavioral medicine interventions. She joined Johnson & Johnson in 2017 to pursue her passion for applying data-driven behavioral science and innovative technology to improve health and wellbeing outcomes. @MadalinaSucala
https://www.jnj.com/jjhws/behavior-science

Nnamdi Ezeanochie is a Senior Manager on the Behavior Science and Analytics team. He has extensive professional experience in technology-based health care and behavioral science implementation and research, with a unique focus on developing-country settings. His research expertise focuses on mobile technology adoption and implementation, health care program management, community and health behavior services, IT healthcare solutions, and disease outbreak management. @NnamdiEzeanoch1
https://www.jnj.com/jjhws/behavior-science

Accelerate your Behavioral Medicine Research Using Machine Learning

By Emma Norris, on 11 April 2019

By April Idalski Carcone, PhD – Wayne State University

Chances are you’ve probably heard the term “machine learning”, but what exactly is it? And, why should behavioral scientists care about it? A recent article in Medium, an online publishing platform, described how computer scientists are leveraging behavioral science to increase website traffic and app usage, explaining behavioral science as “an interdisciplinary approach to understanding human behavior via anthropology, sociology and psychology”. It’s time for behavioral scientists to integrate computer science, specifically machine learning, into our interdisciplinary mosaic. Here’s why.

Simply put, machine learning is a class of statistical analysis techniques in which a computer “learns” to recognize patterns in data without being explicitly programmed to do so. A familiar example of machine learning at work is those advertisements that populate your favorite social media site or web browser searches. Have you ever thought “Wait a minute… how does X know I shop at Y?!” Well, you were just browsing Y’s website, allowing the algorithms programmed into X’s website to learn that you like to shop at Y. Using this knowledge, X’s algorithms will now target Y’s advertising to you. While in the context of internet browsing the use of machine learning models may be helpful, annoying, or even disturbing, these models have powerful implications for behavioral science. For example, at the 2019 SBM Annual Meeting, behavioral scientists described how they used machine learning to interpret data from wearable sensors, such as quantifying screen time based on color detection or identifying eating patterns among participants with obesity. Others used machine learning models to predict eating disorder treatment outcomes from medical chart data and examine the relationship between affect and activity levels.

There are two broad classes of machine learning models – supervised and unsupervised. Supervised machine learning models use coded data (“training” data, in computer science speak) to learn a mathematical model to map inputs (raw data) to the result of a particular cognitive task, which is represented as a code or “label”. Through this process, computers mimic the decision-making process human coders use to perform the same task, with accuracy comparable to human coders but at a fraction of the time required. However, this means that a supervised machine learning model is only as good as the data you use to train it.
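A toy sketch of this supervised setup, assuming an invented coding task: the model counts which words co-occur with which label in the human-coded training examples, then labels new utterances by weighted word overlap. The labels and sentences below are made up for illustration, not a real coding scheme; note how the predictions are entirely determined by the training data supplied.

```python
from collections import Counter, defaultdict

# Toy supervised learner for an invented transcript-coding task: count which
# words co-occur with which label in the "training" (human-coded) data, then
# label new utterances by weighted word overlap. Labels and sentences are
# made up for illustration and are not a real coding scheme.
training_data = [
    ("i want to change my eating habits", "change_talk"),
    ("i am ready to try walking daily", "change_talk"),
    ("i do not think this will work", "sustain_talk"),
    ("i can not give up snacks", "sustain_talk"),
]

def train(examples):
    """Build per-label word counts from coded examples."""
    word_counts = defaultdict(Counter)
    for text, label in examples:
        word_counts[label].update(text.split())
    return word_counts

def predict(model, text):
    """Assign the label whose training vocabulary best overlaps the text."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in model.items()}
    return max(scores, key=scores.get)

model = train(training_data)
print(predict(model, "i want to try walking"))  # → change_talk
```

Swap in different training sentences and the same utterance can flip labels, which is exactly the "only as good as the data you use to train it" caveat in miniature.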

To illustrate, my colleague Alexander Kotov, PhD, and I have developed a series of supervised machine learning models to assign behavioral codes to clinical transcripts of patient-provider communication. Human coders had previously coded transcripts of adolescent patients and their caregivers engaged in a weight loss intervention, using a code scheme that operationalizes patient-provider clinical communication according to the Motivational Interviewing framework. We used this coded data to train a machine learning model to recognize and assign these behavioral codes. The inter-rater reliability between human and machine was k = .663, compared to k = .696 between human coders. We then tested the model in a novel clinical context, HIV clinical care. With no modification, our model accurately identified 70% of patient-provider behaviors (k = .663) compared to human coders. We are currently working to develop additional machine learning models to fully automate the behavioral coding process, including models to parse transcripts into codable segments and analyze the sequencing of communication behavior. This is just one example of how behavioral scientists have leveraged supervised machine learning models. Others have used this approach to analyze clinical transcripts in the assessment of treatment fidelity, to enhance fMRI diagnostic prediction from biomedical data, and to predict mental health outcomes (e.g., suicide) from medical chart data and developmental outcomes (e.g., developmental language disorders) from screening instrument data.
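The kappa statistic used for those comparisons can itself be computed in a few lines: observed agreement between two coders, corrected for the agreement expected by chance. A self-contained sketch follows; the code labels ("OQ", "CQ", "REF") are hypothetical stand-ins, not the actual Motivational Interviewing scheme.

```python
# Cohen's kappa from first principles: observed agreement between two coders,
# corrected for the agreement expected by chance alone. The code labels
# ("OQ", "CQ", "REF") are hypothetical stand-ins, not the actual scheme.
def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    # Chance agreement: product of each coder's marginal label frequencies.
    expected = sum((coder_a.count(l) / n) * (coder_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

coder_a = ["OQ", "REF", "OQ", "CQ", "REF", "OQ"]
coder_b = ["OQ", "REF", "CQ", "CQ", "REF", "OQ"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # → 0.75
```

The same function works whether "coder b" is a second human or a machine learning model, which is how a human-vs-machine kappa like the k = .663 above is produced.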

Unsupervised machine learning models contrast with supervised models in that they use mathematical models to discover patterns in uncoded data. The ability to use uncoded data is a real strength of unsupervised models because of the resource investment required to construct a training coded data set. However, with no training data to compare model performance to, a big drawback of unsupervised machine learning is the inability to assess the accuracy of the model. Rather, computer scientists rely on content area experts, such as behavioral scientists, to guide decisions regarding model accuracy. Some examples of unsupervised machine learning applications include identifying people at high risk for developing dementia from population-based survey data, patients at high risk for death due to chronic heart failure, and clinical subtypes of Chronic Obstructive Pulmonary Disease from electronic medical records.
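A minimal sketch of the unsupervised idea, using invented daily step counts: a tiny two-cluster k-means receives no labels at all, yet separates low- from high-activity days on its own. Whether those discovered clusters are meaningful is exactly the judgment that, as noted above, falls to content experts.

```python
# Toy unsupervised learning: two-cluster k-means on (invented) daily step
# counts. No labels are given; the algorithm discovers the grouping itself.
def kmeans_1d(values, iters=20):
    # Seed the two centroids at the extremes of the data.
    centroids = [min(values), max(values)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        # Assign each value to its nearest centroid...
        for v in values:
            nearest = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[nearest].append(v)
        # ...then move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

steps = [800, 1200, 950, 9800, 11200, 10400]
centroids, clusters = kmeans_1d(steps)
# clusters[0] gathers the low-activity days, clusters[1] the high-activity days.
```

Nothing in the data says "low" or "high"; those interpretations are supplied afterwards by a human looking at the clusters, which is the accuracy-assessment difficulty the paragraph above describes.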

The primary strengths of machine learning models are their ability to process large amounts of data very efficiently and their ability to detect very complex patterns in data. Machine learning analytics far outstrip the abilities of even the best-trained team of human analysts, giving them the potential to accelerate behavioral research to unprecedented rates. On the other hand, because models are developed from the data, an inherent weakness of machine learning models is their dependence on the integrity of the data set used – change the data, change the model. Thus, the best models are built on large, high-quality data sets, which can be difficult and resource-intensive to construct. For example, the initial training data file in the Motivational Interviewing example above was composed of only 38 interactions and required nearly a year to code, after the year needed to develop the coding scheme and train the coders! Model interpretability can also be an issue. The logic underlying some models’ decision-making may not be easily traceable or follow a linear path, particularly in more complex unsupervised models, which may have thousands, if not more, rules. This complexity makes it difficult to assess the accuracy of some machine learning models.

The examples mentioned here are just the tip of the iceberg. There are many applications of machine learning models with vast implications for both research and clinical care. The Human Behaviour-Change Project is using machine learning to cull the published literature to inform behavior change intervention and theory. The Behavioral Research in Technology and Engineering (BRiTE) Center is home to several technology driven projects to improve mental health services. NIH’s Big Data to Knowledge (BD2K) program encourages researchers to engage in data-driven discovery using technology-based tools and strategies, like machine learning, to analyze the ever-growing corpus of complex biological data. These initiatives are the foundation of precision medicine.

So, what are you waiting for? Go find a computer scientist and start harnessing the power of the machine! Still unsure about how machine learning models might augment your work? In addition to the articles referenced here, SAS has a free downloadable white paper that is a great primer on machine learning and its many applications. It’s a quick read, making it a great way to get started thinking about how machine learning might inform your work.

A version of this article originally appeared in the Spring 2019 issue of Outlook, the newsletter of the Society of Behavioral Medicine.

Bio 

Dr April Idalski Carcone is Assistant Professor in the Behavioral Sciences Division of the Department of Family Medicine and Public Health Sciences at Wayne State University. Dr. Carcone is a social worker by training with over 14 years of experience in behavioral health research.

 

 

 

 

 

Brief reflections from the Human Behaviour-Change Project – Emma Norris, UCL

I really enjoyed reading this blog – a great overview of the growing examples of AI impact in the field of behaviour change. As a researcher on the behavioural science team of the Human Behaviour-Change Project referenced in the article, I am working with colleagues and computer scientists at IBM Research Dublin to use machine learning to interpret and synthesise the published literature on behaviour change interventions. Overviews of the aims and methods of the project have recently been summarised in a protocol paper and a piece in The Conversation.

We are currently taking a supervised learning approach, training the developing system to identify key entities of interest in published behaviour change intervention papers. Behavioural scientists on the project are providing training sets of the key entities needed to understand the effects of behaviour change interventions within published papers, such as the Behaviour Change Techniques (BCTs) used within the intervention or the context (e.g. Population and Setting) in which the intervention was carried out. The entities captured are being extracted according to a developing ontology of key entities within the field of behaviour change. By building this training set of human-annotated papers, we are working towards developing unsupervised models for data extraction and analysis in the future. A part of the blog of particular interest to me was the discussion of the accuracy of human-annotated data: a “supervised machine learning model is only as good as the data you use to train it”. This potential bias is an issue we have been conscious of from the outset of the project. To train the system, our process involves having two human coders annotate information from papers, before reconciling any differences to form one agreed-upon set of entities. Pairs of coders are frequently changed to prevent biases forming within coding pairs. However, it is possible that overall lab biases may develop in the strategies used to annotate papers. For example, our team may identify different pieces of text as representing the BCT of ‘goal-setting’ than another lab would. In the case of BCTs, this risk is relatively minimised, as extensive training on coding BCTs already exists. However, other developing aspects of the ontology, such as Population, are being annotated according to rules developed in-house as the result of an iterative development process. Another lab may well have developed the annotation manual and guidance differently.

We also regularly request input from external international experts in behavioural science and public health to inform the development of the ontology with which we are coding papers. This input advises us on the relevance and comprehensibility of our ontology and annotations in wider international and multidisciplinary contexts. However, it may indeed be the case that we are missing potential feedback that would further help us refine the data training the system.

The practical issues briefly raised in the original blog are part of important continuing discussions in behavioural applications of machine learning and beyond. We look forward to continuing to contribute to these discussions as our work on the project progresses!

Bio

Dr Emma Norris (@EJ_Norris) is a Research Fellow on the Human Behaviour-Change Project at UCL’s Centre for Behaviour Change. Her research interests include the synthesis of health behaviour change research and development and evaluation of physical activity interventions.

 

 

 

Digital Health at the 2019 Society of Behavioral Medicine Conference

By Emma Norris, on 21 March 2019

By Allison A. Lewinski – Durham Center of Innovation to Accelerate Discovery and Practice Transformation, Durham Veterans Affairs Medical Center in Durham, North Carolina, USA

The Society of Behavioral Medicine conference occurred from March 6th-9th, 2019 in Washington, DC. The theme of the conference was “Leading the Narrative.”

I attend the SBM annual conferences because my research focuses on eHealth interventions and chronic illness self-management and is at the intersection of precision medicine and population health. My interest is in promoting health equity by identifying individuals at high risk for adverse health outcomes due to social and environmental factors and then creating tailored population-level interventions to improve health outcomes.

A sign of a good conference is when you have trouble deciding which sessions to attend! This year, like at previous SBM conferences, I found it challenging to decide which sessions to attend because there were so many relevant to my current research interests. One of my favorite parts of SBM is looking through the SBM Conference App and the SBM Pocket Program and marking all the presentations and posters I want to visit! I met so many interesting people throughout the conference and had so many great discussions about chronic illness and digital health research. Below, I highlight my four take-aways, since I am unable to write about all the interesting sessions related to digital health.

1. Digital health is here to stay. I was impressed at the vast amount of research being completed using various types of digital health in all types of settings. I think this is because digital health tools facilitate social interactions among patients, the patient’s family members and friends, and healthcare providers. Susannah Fox (@SusannahFox) gave the Opening Keynote on Thursday morning. Her talk, “Health and Technology Megatrends: How You Can Anticipate the Future,” was engaging and informative; she discussed how people navigate healthcare and chronic illness using the Internet so as not to feel alone. These individuals search for and find, or create if necessary, communities in which to exchange disease-related support.

2. Digital health data provides useful insight into health behaviors. Digital health tools enable researchers to collect vast amounts of data, and researchers in academia and industry need to be purposeful about data collection and data analysis techniques. I am interested in unique data collection methods and in either using existing analysis methods or developing new ones when necessary, so I attended several paper sessions that discussed using data collected from Twitter, YouTube, wearables or sensors, and other digital health tools. These sessions analyzed both qualitative and quantitative data, and I was struck by how the researchers collected these data, what methods they used to analyze them, and how these data informed the research question. These talks highlighted how data collected from digital health tools provide insight into the patient’s actions in the real world, specifically the barriers and facilitators to engaging in health-promoting behaviors. Sessions that stuck out to me included:

  • Rebecca Bartlett Ellis, PhD, RN, ACNS-BC, Indiana School of Nursing (@DrBartlettEllis)—“Development of a Decision-Making Checklist Tool to Support Selecting Technology in Digital Health Research.” She stressed the importance of being aware of participant burden with healthcare devices—specifically, what are we asking participants to do with their health technology? What are the potential risks and benefits of using mobile apps and pervasive sensing devices in health research?
  • Camella Rising, PhD, RN, National Cancer Institute—“Characterizing Individuals by mHealth Pattern of Use: Results from the 2018 NCI Health Information National Trends Survey.” She examined non-users of mHealth and defined five categories of use, noting that these results can help describe patterns of mHealth use and thereby improve mHealth intervention design.
  • Philip Massey, PhD, MPH, Drexel Dornsife School of Public Health (@profmassey)—“Advancing Digital Evaluation Methods: Use of Publicly Available Online Data to Measure Impact of Global Health Communication.” He examined YouTube comments from episodes of a TV series. I found it fascinating how he identified a unit of analysis (e.g., an original post) and developed a codebook to examine this narrative data.
  • George Shaw, PhD, University of North Carolina at Charlotte—“Exploring Diabetes Topics Using Text-Mining Approaches: Supporting Quality Control of Health Information in Digital Spaces.” He examined data from Twitter, describing how he used the Twitter API to obtain data with hashtag and keyword searches, how he defined the unit of analysis (e.g., an original tweet), and how he then used two different text-mining techniques to analyze the data.

3. Health communication is essential for anyone in research. One unifying theme throughout the conference was promoting health communication to the public. I attended several sessions on how behavioral medicine researchers were working to increase the awareness and impact of evidence-based research in the larger community. As researchers in a variety of settings (e.g., academia, industry, government), we have the power to impact the narrative regarding health behaviors. We can help increase knowledge about chronic illness, correct misinformation, and connect patients with healthcare providers or peers by promoting evidence-based research. Sherry Pagoto, PhD, University of Connecticut (@DrSherryPagoto), in her keynote, “Leading the Narrative: Bringing Behavioral Medicine to the Masses,” discussed the importance of getting out there and promoting good science. She talked about how important it is to disseminate research through various channels such as Twitter, podcasts, and other media. Essentially – be purposeful, proactive, and deliberate about disseminating your research findings!

4. Patients are a valuable resource and asset to the digital health community. The most impactful session I attended while at SBM occurred on Saturday morning. The Master Lecture Panel, “Beyond Tokenism: How Patients are Impacting Research, Advocacy, and Healthcare Delivery” included: Tamika Felder, Cervivor (@Tamikafelder); Cyrena Gawuga, PhD, Boston University (@ceginpvd); and Dana Lewis, BA, OpenAPS (@danamlewis); and was moderated by Emil Chiauzzi, PhD, PatientsLikeMe (@emilchiauzzi). These three individuals discussed how they became involved in research and advocacy as patients, and how they were impacting healthcare delivery for themselves and their communities. I was amazed at how they co-created communities using the Internet and how they used digital health tools to maintain these communities in order to share information and support. Each woman discussed how they, and their respective communities, used various digital health tools to identify information, exchange support, and disseminate meaningful research findings!

Overall impression. I enjoyed attending SBM due to the many conversations I had with other researchers who used digital health tools to improve health and wellness in individuals. I urge you to attend the next SBM conference in 2020! I leave you with some questions to ponder regarding the use of digital health tools. Please contact me with any questions or comments: @allisonlewinski!

 

Questions to ponder:

  • How can we ensure rigorous research is completed using digital health tools?
  • What role should scientists and researchers have in ensuring evidence-based information is in the media? How do we combat ‘fake news’?
  • Patients have powerful voices—how can we as scientists amplify these patient voices in all aspects of the research process?
  • How are you disseminating your research? Who are you talking to? Where is the information going? What is the language you are using?

Links to check out:

SBM conference attendees were encouraged to add their presentations and posters to the Center for Open Science meetings database. Check out the uploaded presentations and posters here: https://osf.io/view/SBM2019/

There were lots of live conference Tweeters at SBM. Check out the hashtag #sbm2019 to see what was discussed!

 

Bio:

Allison A. Lewinski, PhD, MPH (@allisonlewinski) is a Postdoctoral Fellow at the Durham Center of Innovation to Accelerate Discovery and Practice Transformation (@DurhamHSRD) at the Durham Veterans Affairs Medical Center in Durham, North Carolina. Funding information: Support for Dr. Lewinski was provided by the Department of Veterans Affairs Office of Academic Affiliations (TPH 21-000), and publication support was provided by the Durham VA Health Services Research Center of Innovation funding (CIN 13-410). The content is solely the responsibility of the author and does not necessarily reflect the position or policy of the U.S. Department of Veterans Affairs, or the U.S. government.

 

 

Mapping out technological designs employed in digital interventions to reduce sedentary behaviours

By Emma Norris, on 12 March 2019

By Yitong Huang & Holly Blake – University of Nottingham

Prolonged sedentary time is adversely and independently associated with health outcomes and risk of mortality, and as such is a rising public health concern. Many office-based occupations contribute to increased risk of prolonged sedentary behaviour. Digital technologies with activity monitoring and feedback functionalities, such as computer software, web apps, wearables, and the Internet of Things, are increasingly being deployed in the workplace with the purpose of motivating sitting reduction and regular breaks. The past decades have seen an exponential growth of computing power at affordable prices. This has resulted in an increasing variety of digital gadgets (e.g. personal computers, tablets, smartphones, wearables, and Internet of Things devices) that a person is exposed to and interacts with on a day-to-day basis. Such a range of technology provides health intervention designers with a wider range of device choices that offer different form factors and features. However, it remains unclear which devices and digital features are suitable for inclusion in sedentary behaviour interventions targeting office workers.

Our recent study, “Digital Interventions to Reduce Sedentary Behaviours of Office Workers: Scoping Review”, could be particularly informative to those looking to locate the relevant design inputs.

Compared with previous reviews on sedentary behaviour interventions, our study has a focus on the technological design and includes evidence from the engineering and computer science arena as well as public health. We set out to achieve two aims. First, to map out the technological landscape and research activities conducted in different disciplines on this topic; and second, to determine research gaps in terms of utilizing and innovating technologies for workplace sedentary behaviour interventions.

A total of 68 articles describing 45 digital interventions were included in the study. We categorized the articles and interventions into development, feasibility/piloting, evaluation, and implementation phases based on the UK Medical Research Council (MRC) framework for developing and evaluating complex interventions; we also developed a novel framework to classify technological features and annotate technological configurations. The framework encompasses common technological features such as information delivery, digital logs, passive data collection, automated tailored feedback, scheduled prompts, connected devices, and mediated organizational support and social influences.
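One way to picture this kind of annotation – a purely illustrative sketch of our own, not code from the review, with hypothetical feature labels and example configurations – is to encode each intervention as the set of features it uses, so that configurations can be compared programmatically:

```python
# Illustrative sketch: annotate intervention configurations as feature sets.
# Feature labels follow the categories named above; example data is hypothetical.
FEATURES = {
    "information_delivery", "digital_logs", "passive_data_collection",
    "automated_tailored_feedback", "scheduled_prompts",
    "connected_devices", "mediated_social_support",
}

def shared_features(a, b):
    """Return the features two intervention configurations have in common."""
    assert a <= FEATURES and b <= FEATURES, "unknown feature label"
    return a & b

# Two hypothetical interventions from opposite ends of the landscape:
app_only = {"information_delivery", "digital_logs", "scheduled_prompts"}
wearable = {"passive_data_collection", "connected_devices",
            "automated_tailored_feedback"}
```

Set intersection then makes gaps visible at a glance: `shared_features(app_only, wearable)` is empty, flagging exactly the kind of unintegrated configurations the review discusses.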

Our study identified a research gap in the integration of passive data collection and connected devices with automated tailored feedback or scheduled prompts, as most of the published studies employing such configurations were still in the development or feasibility/piloting phase. For instance, validated passive data collection devices like the ActivPAL (PAL Technologies Ltd, Glasgow, United Kingdom) and ActiGraph (ActiGraph LLC, Pensacola, FL, USA) were widely used for outcome measurement in interventional studies, but less commonly used for intervention delivery. One explanation is that early models of the ActivPAL and ActiGraph devices were not equipped with any output module (e.g. a screen) to let wearers, or even researchers, receive feedback on sedentary behaviour during the monitoring period. Their stored data is also not accessible to third-party apps or devices in real time for implementation of Just-In-Time Adaptive Interventions (JITAIs). This may, in turn, discourage deployment of those devices beyond the assessment period (usually 1 week or 5 workdays). However, continuous data collection throughout the whole study period can generate valuable insights into the process of change, as demonstrated in several studies. Hence, our findings highlight the importance of interdisciplinary and intersectoral collaborations to maximize the potential of technologies. For instance, the provision of Application Programming Interfaces (APIs) by manufacturers to allow research-purposed apps or devices to stream the devices’ raw data in real time or near real time will accelerate development and innovation in this field.
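The kind of real-time use such APIs would enable can be sketched in a few lines. Everything here is hypothetical – neither ActivPAL nor ActiGraph exposes such an interface, and the minute-level posture stream stands in for data a manufacturer API would need to supply – but it shows the simple trigger logic a JITAI-style break prompt requires:

```python
# Hypothetical sketch: fire a break prompt once continuous sitting exceeds
# a threshold, given a minute-by-minute posture stream from a (hypothetical)
# manufacturer API delivering near-real-time data.

SIT_LIMIT_MIN = 30  # prompt after 30 uninterrupted sitting minutes (assumed)

def break_prompts(posture_stream, limit=SIT_LIMIT_MIN):
    """posture_stream: iterable of 'sit' / 'stand' / 'step' minute samples.
    Returns the minute indices at which a prompt would fire."""
    prompts, run = [], 0
    for minute, posture in enumerate(posture_stream):
        run = run + 1 if posture == "sit" else 0
        if run == limit:            # fire once per unbroken sitting bout...
            prompts.append(minute)
            run = 0                 # ...then reset so reminders recur every `limit` min
    return prompts
```

In a deployed system the loop would poll the API rather than iterate over a list, but the point stands: the intervention logic is trivial once raw data is streamable, which is why API access, not algorithm design, is the bottleneck described above.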

Our findings also uncovered a lack of research on scheduled prompts beyond the feasibility/piloting phase. We suggest that research opportunities exist in exploiting novel digital interfaces with wireless connectivity for prompting and persuading office workers. Exciting development and pilot studies on tangible, embedded and ambient media are being conducted in the engineering, computing, and design fields. However, innovations in these fields do not seem to move effectively to the next phase of evaluation with more rigorous study designs (more commonplace in public health and the behavioural sciences). More thought is needed on what kinds of mechanisms can help feed design-related findings into other fields with an interest in behaviour change, and move novel technologies downstream to the evaluation and implementation phases. As a starting point, we suggest researchers from all disciplines familiarize themselves with the MRC framework, and report and position their research within the big picture of developing and evaluating complex interventions.

You can read the full paper here.

 

Questions:

  1. What is the potential of novel digital media (e.g. wearables, Internet of Things, programmable physical artefacts) for delivering behaviour change interventions? Is it worth the effort to move them downstream to the evaluation and implementation phase?
  2. How can we better connect and empower two communities—[a] those with expertise in health behaviour change, intervention content development, and evaluation, and [b] those with enhanced technical capacity to design and develop technologies, and study end-user interactions with technologies?
  3. How does your field consider, practise and disseminate “design and development” research?

 

Bio:

Yitong Huang @EchoYitongHuang previously graduated from UCL with an MSc in Social Cognition and is now a PhD student at the Horizon Centre for Doctoral Training, University of Nottingham. Her PhD looks at the opportunities and challenges of using the Internet of Things to encourage healthier office work behaviours. The project is co-funded by the EPSRC and Unilever UK. Yitong’s broader research interests include persuasive designs for various behaviour change contexts and designs that bring about positive changes in people’s lives. She takes an interdisciplinary approach to research and innovation, drawing on a combination of theory- and evidence-based intervention design frameworks and user-centred system design methods.

 

Dr Holly Blake @hollyblakenotts is Associate Professor of Behavioural Science at the University of Nottingham Faculty of Medicine and Health Sciences, and a member of the Centre for Healthcare Technologies (CHT). She is a Chartered Health Psychologist, Associate Fellow of the British Psychological Society (AFBPsS), and Senior Fellow of the Higher Education Academy (SFHEA). Her work has contributed to the development of health and wellbeing services for employees in the public, private and third sectors, in the UK, Europe, the Middle East, and Asia. More broadly, Holly has research interests in workforce issues, health services research, patient experiences and self-management strategies in people living with long-term conditions, including the application of technology to improve healthcare, education, self-care practices and the patient experience.

 

Too much of a good thing? Personal accountability and commitment to health goals

By Emma Norris, on 5 March 2019

By Manu Savani – University College London, UK

Like me, you might be thinking about how to be healthier and happier in the year ahead. Health behaviours often involve a trade-off – we pay the price now for making the change, but the benefits may only be felt further down the road. For example, we take up a new diet or gym class; it feels like hard work now but gives us hope that we will fit back into the t-shirt and shorts in the summer. Every time we make decisions that affect our health goals, such as selecting from a menu or reaching for the running shoes, we have to choose between listening to our forward-looking ‘planner’ self and our myopic ‘doer’ self.  Faced with such a choice, we might be tempted to privilege current gains over future gains, a phenomenon familiar to us as ‘present bias’. So how do we stay on track with our goals?
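Present bias is often formalized with quasi-hyperbolic (“beta-delta”) discounting, in which every future payoff is scaled by an extra factor β < 1 on top of ordinary exponential discounting; the ‘planner’ corresponds to β = 1 and the myopic ‘doer’ to β < 1. A minimal sketch, with illustrative numbers that are not drawn from the study:

```python
def discounted_utility(payoffs, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) value of a payoff stream.
    payoffs[0] is received now; each future payoff at period t is
    weighted by beta * delta**t (present bias on top of discounting)."""
    total = payoffs[0] if payoffs else 0.0
    for t, u in enumerate(payoffs[1:], start=1):
        total += beta * (delta ** t) * u
    return total

# Gym example: cost now (-5), benefit two periods later (+8).
# The 'doer' (beta = 0.6) values the plan negatively and skips the gym;
# the 'planner' (beta = 1) values the same plan positively.
doer = discounted_utility([-5, 0, 8], beta=0.6)
planner = discounted_utility([-5, 0, 8], beta=1.0)
```

The sign flip between the two valuations of the identical payoff stream is exactly the planner–doer conflict described above, and it is the wedge that commitment devices are designed to close.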

Commitment devices might help – personal strategies that bind your future self to desired behaviours  – and are increasingly a feature of weight loss toolkits. Deposit contracts set aside a sum of money that will be lost unless a goal is achieved. The idea is to create a cost, felt by the present-biased ‘doer’ self, which aligns current actions with future goals. Another way to do this is by creating a reputational cost through a public pledge to achieve a goal.

In my new study, I set out to test the effect of a reputational commitment device on health goals, with a field experiment involving users of an online weight loss service.

Clients had access to calorie counting tools and paid around £5 in monthly membership fees for the service. A total of 118 participants randomly assigned to a reputational commitment intervention were invited to name a weight loss coach: a supportive friend or family member who was aware of the weight loss target and might be asked to verify progress after four weeks. Planner-doer theory implies that making the weight loss target known to others would increase accountability to that target by generating a psychological tax on reneging. Digital health scholars suggest human support can enhance the effectiveness of such online interventions. Participants offered the added reputational commitment were therefore expected to report higher weight loss than the comparison group (n=145), who continued with the normal service, paying their fee but with no extra commitment strategies.

Data on weight loss at twelve weeks showed that on average both experimental groups lost weight, with the reputational intervention group self-reporting 1.1% average weight loss compared to 2.2% in the comparison group.

The reputational commitment strategy did not work as expected, and my analysis explores possible reasons for this unexpected finding. Participants who named a coach may have experienced ‘commitment overload’, which might explain why those who complied with the reputational intervention (the 41% who named a coach) experienced 4.4 kg less weight loss than the comparison group. This explanation is speculative and puzzles remain – for example, it is not clear why the effects became more pronounced at twelve weeks, when the groups demonstrated fairly even weight loss progress at four weeks.

What we can learn from this research is that reputational commitment devices can have a significant – but unpredictable – impact on health behaviours. For policy designers, it would be wrong to exclude reputational commitment strategies from the menu of weight loss aids. The intervention supported people differently, with some participants losing more than 10% of their initial weight. A quarter of participants who declined to name a coach could not think of someone suitable, suggesting there may be demand for support and accountability on a personal level.

These strategies need to be better understood in order to harness their potential positive effects. Future work might explore how to identify the optimal level and type of commitment to motivate behaviour change, and how best to combine online and offline weight loss strategies. In the meantime, you could tell someone about your health goals for the year. But go easy on yourself – one commitment strategy at a time.

Read the paper: “The Effects of a Commitment Device on Health Outcomes: Reputational Commitment and Weight Loss in an Online Experiment”.

Questions

  • Have you tried any reputational commitment strategies? If so, how did you find them? If not, why don’t they appeal?
  • Is ‘commitment overload’ or ‘commitment saturation’ a plausible explanation?
  • How might we identify in advance whether people are likely to benefit from additional layers of commitment?

Bio 

Dr Manu Savani is a Teaching Fellow in Public Policy at UCL Department of Political Science. Her doctoral research in behavioural public policy examined the impact of a variety of commitment devices in health behaviour change, using experimental and qualitative methods. Manu’s current research continues to ask how programmes and policy can be designed to take account of behavioural biases, with a focus on welfare policy and financial decision-making: https://www.ucl.ac.uk/political-science/people/teaching/manu-savani