CBC Digi-Hub Blog
Accelerate your Behavioral Medicine Research Using Machine Learning

By Emma Norris, on 11 April 2019

By April Idalski Carcone, PhD – Wayne State University

Chances are you’ve heard the term “machine learning”, but what exactly is it? And why should behavioral scientists care? A recent article in Medium, an online publishing platform, described how computer scientists are leveraging behavioral science to increase website traffic and app usage, explaining behavioral science as “an interdisciplinary approach to understanding human behavior via anthropology, sociology and psychology”. It’s time for behavioral scientists to integrate computer science, specifically machine learning, into our interdisciplinary mosaic. Here’s why.

Simply put, machine learning is a class of statistical analysis techniques in which a computer “learns” to recognize patterns in data without being explicitly programmed to do so. A familiar example of machine learning at work is the advertising that populates your favorite social media site or web browser searches. Have you ever thought “Wait a minute… how does X know I shop at Y?!” Well, you were just browsing Y’s website, which allowed the algorithms programmed into X’s site to learn that you like to shop at Y. Using this knowledge, X’s algorithms will now target Y’s advertising to you. While in the context of internet browsing machine learning models may be helpful, annoying, or even disturbing, these models have powerful implications for behavioral science. For example, at the 2019 SBM Annual Meeting, behavioral scientists described how they used machine learning to interpret data from wearable sensors, such as quantifying screen time based on color detection or identifying eating patterns among participants with obesity. Others used machine learning models to predict eating disorder treatment outcomes from medical chart data and examine the relationship between affect and activity levels.

There are two broad classes of machine learning models – supervised and unsupervised. Supervised machine learning models use coded data (“training” data, in computer science speak) to learn a mathematical model that maps inputs (raw data) to the result of a particular cognitive task, represented as a code or “label”. Through this process, computers mimic the decision-making process human coders use to perform the same task, achieving accuracy comparable to human coders in a fraction of the time. However, this also means that a supervised machine learning model is only as good as the data you use to train it.
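
To make this concrete, below is a minimal sketch of supervised learning on coded text, assuming the Python scikit-learn library is available. The utterances, the codes, and the model choice are made-up stand-ins for a real, human-coded training set, not data or methods from any particular study.

```python
# Minimal supervised-learning sketch (hypothetical data, assuming scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Training" data: raw inputs paired with the codes human coders assigned.
utterances = [
    "I really want to get healthier this year",
    "I don't think I can give up soda",
    "My mom keeps nagging me about my weight",
    "I could try walking home from school",
]
codes = ["change_talk", "sustain_talk", "neutral", "change_talk"]

# The model learns a mapping from raw text to the human-assigned codes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, codes)

# Once trained, the model assigns codes to new, uncoded utterances.
print(model.predict(["Maybe I could cut back on fast food"]))
```

In practice the training set would contain thousands of coded segments rather than four, and the features and classifier would be chosen to suit the coding scheme.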

To illustrate, my colleague Alexander Kotov, PhD, and I have developed a series of supervised machine learning models to assign behavioral codes to clinical transcripts of patient-provider communication. Human coders had previously coded transcripts of adolescent patients and their caregivers engaged in a weight loss intervention using a code scheme that operationalizes patient-provider clinical communication according to the Motivational Interviewing framework. We used this coded data to train a machine learning model to recognize and assign these behavioral codes. The inter-rater reliability between human and machine was k = .663, compared to k = .696 between human coders. We then tested the model in a novel clinical context, HIV clinical care. With no modification, our model accurately identified 70% of patient-provider behaviors (k = .663) compared to human coders. We are currently working to develop additional machine learning models to fully automate the behavioral coding process, including models to parse transcripts into codable segments and analyze the sequencing of communication behavior. This is just one example of how behavioral scientists have leveraged supervised machine learning models. Others have used this approach to analyze clinical transcripts for the assessment of treatment fidelity, to enhance fMRI diagnostic prediction from biomedical data, and to predict mental health outcomes (e.g., suicide) from medical chart data and developmental outcomes (e.g., developmental language disorders) from screening instrument data.
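
For readers curious how agreement figures like these are computed, the sketch below calculates Cohen’s kappa between a human coder and a machine coder. It again assumes scikit-learn, and the code sequences are invented for illustration rather than being the data behind the k values reported above.

```python
# Sketch of a human-machine agreement check using Cohen's kappa (illustrative data).
from sklearn.metrics import cohen_kappa_score

human_codes   = ["change_talk", "sustain_talk", "neutral", "change_talk",
                 "neutral", "sustain_talk", "change_talk", "neutral"]
machine_codes = ["change_talk", "sustain_talk", "change_talk", "change_talk",
                 "neutral", "sustain_talk", "neutral", "neutral"]

kappa = cohen_kappa_score(human_codes, machine_codes)
print(f"Human-machine agreement: kappa = {kappa:.3f}")
```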

Unsupervised machine learning models contrast with supervised models in that they use mathematical models to discover patterns in uncoded data. The ability to use uncoded data is a real strength of unsupervised models, given the resource investment required to construct a coded training data set. However, with no coded data against which to compare model performance, a major drawback of unsupervised machine learning is that the accuracy of the model cannot be assessed directly. Rather, computer scientists rely on content area experts, such as behavioral scientists, to guide decisions regarding model accuracy. Some examples of unsupervised machine learning applications include identifying people at high risk for developing dementia from population-based survey data, patients at high risk for death due to chronic heart failure, and clinical subtypes of Chronic Obstructive Pulmonary Disease from electronic medical records.
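
As a rough illustration of how an unsupervised model discovers structure without coded labels, here is a small k-means clustering sketch, again assuming scikit-learn. The patient features, the records, and the choice of two clusters are all hypothetical; deciding whether the resulting groups are clinically meaningful is exactly the kind of judgment left to the content expert.

```python
# Unsupervised sketch: grouping hypothetical patient records with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [age, symptom score, hospitalizations in the past year]
records = np.array([
    [67, 12, 0],
    [71, 30, 2],
    [58,  8, 0],
    [74, 28, 3],
    [63, 11, 1],
])

# Standardize the features, then ask for two groups; no labels are involved.
X = StandardScaler().fit_transform(records)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # one cluster assignment per record
```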

The primary strengths of machine learning models are their ability to process large amounts of data very efficiently and their ability to detect very complex patterns in data. Machine learning analytics far outstrip the abilities of even the best-trained team of human analysts, giving them the potential to accelerate behavioral research at unprecedented rates. On the other hand, because models are developed from the data, an inherent weakness of machine learning models is their dependence on the integrity of the data set used – change the data, change the model. Thus, the best models are built on large, high quality data sets, which can be difficult and resource intensive to construct. For example, the initial training data file in the Motivational Interviewing example above was composed of only 38 interactions and required nearly a year to code, after the year needed to develop the coding scheme and train the coders! Model interpretability can also be an issue. The logic underlying some models’ decision-making may not be easily traceable or follow a linear path, particularly in more complex unsupervised models, which may have thousands of rules, if not more. This complexity makes it difficult to assess the accuracy of some machine learning models.

The examples mentioned here are just the tip of the iceberg. There are many applications of machine learning models with vast implications for both research and clinical care. The Human Behaviour-Change Project is using machine learning to mine the published literature to inform behavior change interventions and theory. The Behavioral Research in Technology and Engineering (BRiTE) Center is home to several technology-driven projects to improve mental health services. NIH’s Big Data to Knowledge (BD2K) program encourages researchers to engage in data-driven discovery using technology-based tools and strategies, like machine learning, to analyze the ever-growing corpus of complex biological data. These initiatives are the foundation of precision medicine.

So, what are you waiting for? Go find a computer scientist and start harnessing the power of the machine! Still unsure about how machine learning models might augment your work? In addition to the articles referenced here, SAS has a free downloadable white paper that is a great primer on machine learning and its many applications. It’s a quick read, making it a great way to start thinking about how machine learning might inform your work.

A version of this article originally appeared in the Spring 2019 issue of Outlook, the newsletter of the Society of Behavioral Medicine.

Bio 

Dr April Idalski Carcone is Assistant Professor in the Behavioral Sciences Division of the Department of Family Medicine and Public Health Sciences at Wayne State University. Dr. Carcone is a social worker by training with over 14 years of experience in behavioral health research.

Brief reflections from the Human Behaviour-Change Project – Emma Norris, UCL

I really enjoyed reading this blog – a great overview of the growing examples of AI’s impact in the field of behaviour change. I am a researcher on the behavioural science team of the Human Behaviour-Change Project referenced in the article, where we are collaborating with computer scientists at IBM Research Dublin to use machine learning to interpret and synthesise the published literature on behaviour change interventions. Overviews of the aims and methods of the project have recently been summarised in a protocol paper and a piece in The Conversation.

We are currently taking a supervised learning approach, training the developing system to identify key entities of interest in published behaviour change intervention papers. Behavioural scientists on the project are providing training sets of the key entities needed to understand the effects of behaviour change interventions within published papers, such as the Behaviour Change Techniques (BCTs) used within the intervention or the context (e.g. Population and Setting) in which the intervention was carried out. These entities are extracted according to a developing ontology of key entities within the field of behaviour change. By building this training set of human-annotated papers, we are working towards developing unsupervised models for data extraction and analysis in the future.

A particular part of the blog of interest to me was the discussion of the accuracy of human-annotated data: a “supervised machine learning model is only as good as the data you use to train it”. This potential bias is an issue we have been conscious of from the outset of the project. To train the system, two human coders annotate information from each paper before reconciling any differences to form one agreed-upon set of entities. Pairs of coders are changed frequently to prevent biases forming within coding pairs. However, it is possible that lab-wide biases may still develop in the strategies used to annotate papers. For example, our team may identify different pieces of text as representing the BCT of ‘goal-setting’ than another lab would. For BCTs this risk is relatively small, as extensive training on coding BCTs already exists. However, other developing aspects of the ontology, such as Population, are being annotated according to rules developed in-house as the result of an iterative development process, and another lab may well have developed the annotation manual and guidance differently.
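
To illustrate the double-annotation step in miniature, the sketch below (plain Python, with illustrative entity names) compares two coders’ annotations for a single paper and flags the disagreements that would go to reconciliation; it is a simplification of the process rather than the project’s actual tooling.

```python
# Simplified double-annotation comparison (illustrative entities, not project tooling).
coder_1 = {"goal setting (behaviour)", "self-monitoring of behaviour", "feedback on behaviour"}
coder_2 = {"goal setting (behaviour)", "social support (practical)", "feedback on behaviour"}

agreed = coder_1 & coder_2          # entities both coders identified
to_reconcile = coder_1 ^ coder_2    # entities only one coder identified

print("Agreed:", sorted(agreed))
print("Needs reconciliation:", sorted(to_reconcile))
```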

We also regularly request input from external international experts in behavioural science and public health to inform the development of the ontology with which we are coding papers. This input tells us how relevant and understandable our ontology and annotations are in wider international and multidisciplinary contexts. Even so, we may be missing feedback that would further help us refine the data used to train the system.

The practical issues briefly raised in the original blog are part of important continuing discussions in behavioural applications of machine learning and beyond. We look forward to contributing to these discussions as our work on the project progresses!

Bio

Dr Emma Norris (@EJ_Norris) is a Research Fellow on the Human Behaviour-Change Project at UCL’s Centre for Behaviour Change. Her research interests include the synthesis of health behaviour change research and development and evaluation of physical activity interventions.
