CBC Digi-Hub Blog

Understanding intervention effectiveness using novel techniques: Report from EHPS 2019 Symposium

By Emma Norris, on 9 October 2019

By Emma Norris, Gjalt-Jorn Y. Peters, Neža Javornik, Marta M. Marques, Keegan Knittle, Alexandra Dima

The European Health Psychology Society (EHPS) held its 2019 conference from 4-7 September in Dubrovnik, Croatia. The packed programme featured a wide range of research across health psychology, including digital interventions, theoretical and methodological advances, chronic illness, preventive health and much more.

This extended blog summarises a symposium showcasing novel techniques and tools for intervention specification, entitled ‘Understanding intervention effectiveness: analysing potential for change, improving intervention reporting, and using machine-readable decision justification’. The symposium aimed to address an urgent question in health psychology: how can we design more effective interventions? The five presentations offered practical tools to support researchers in our shared mission to better understand intervention effectiveness and better support population health. We summarise each presentation below, followed by concluding thoughts and questions from the symposium’s discussant, Dr Alexandra Dima.

You can find the slides for the symposium here: https://osf.io/hvkaz/

Potential for change (PΔ): New metrics for tailoring and predicting response to behaviour change interventions – Keegan Knittle, University of Helsinki

A novel integrative construct, potential for change (PΔ), accounts for ceiling/floor effects to predict an individual’s likelihood of responding to an intervention. Using baseline data from a randomised controlled trial testing the ‘Let’s Move It’ physical activity promotion intervention, the team calculated determinant-level PΔ scores for the 12 named theoretical determinants in the intervention. They then calculated ‘PΔ-global’, the mean of the 12 PΔ-determinant scores, weighted by each determinant’s association with moderate-to-vigorous physical activity (MVPA) at baseline. In this way, PΔ can also be seen as a measure of the extent to which the theory underlying an intervention matches an individual intervention recipient. Among intervention recipients, PΔ-global followed a normal distribution and was significantly related to increases in accelerometer-measured MVPA (r=.269; p<.001) and self-reported days per week with at least 30 minutes of MVPA (r=.175; p=.001): the intervention’s primary outcomes. Hence, using data from the Let’s Move It study, PΔ-global accounted for floor/ceiling effects and predicted response to a theory-based behaviour change intervention. Possible future uses of PΔ include applying it to time-series data of individual determinants as a means to tailor intervention delivery.
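
The talk did not hinge on a single formula, but the logic can be sketched in a few lines of R. In this minimal illustration, we assume each determinant is scored on a bounded 1-7 scale, take an individual’s PΔ-determinant score as the headroom between their baseline score and the scale ceiling, and weight by each determinant’s baseline correlation with the outcome. All variable names and this exact operationalisation are illustrative, not the authors’ code:

# Illustrative sketch of the PΔ logic, not the Let's Move It analysis code.
# Assumption: 12 determinants scored 1-7 (columns d1..d12) and a baseline
# outcome `mvpa` (e.g. daily minutes of moderate-to-vigorous activity).
set.seed(1)
baseline <- as.data.frame(matrix(sample(1:7, 100 * 12, replace = TRUE),
                                 ncol = 12,
                                 dimnames = list(NULL, paste0("d", 1:12))))
baseline$mvpa <- rnorm(100, mean = 30, sd = 10)

scale_max <- 7

# Determinant-level PΔ: room left between the baseline score and the ceiling
pd_determinant <- scale_max - baseline[paste0("d", 1:12)]

# Weights: each determinant's baseline association with the outcome
w <- sapply(paste0("d", 1:12),
            function(d) abs(cor(baseline[[d]], baseline$mvpa)))

# PΔ-global: weighted mean of the 12 determinant-level PΔ scores
pd_global <- apply(pd_determinant, 1, weighted.mean, w = w)

hist(pd_global, main = "PΔ-global", xlab = "PΔ-global score")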

– – – – – –

Reporting the characteristics of treatment-as-usual in health behavioural trials – Neža Javornik, University of Aberdeen

Treatment-as-usual (TAU), a common comparator in health behavioural trials, allows us to establish how effective an intervention is against an existing standard treatment provided in a given setting. Such treatment, typically delivered in person, can vary in its characteristics (e.g. who provided it, how, for how long, and what it contained), and these variations can influence behaviour and health outcomes among control-group participants, and thus trial effect sizes (e.g. de Bruin et al., 2009, 2010). For the interpretation and comparison of trials, it is important that readers and systematic reviewers have a clear understanding of what the TAU in a particular trial consisted of. This requires a standard format for TAU reporting, which the present study attempted to identify.

A narrative review was first conducted to identify potentially important TAU characteristics, which were mapped onto existing reporting frameworks (Intervention Mapping, TIDieR and the BCT Taxonomy v1). The identified characteristics informed a modified Delphi expert consensus study, which aimed to establish which TAU characteristics are necessary or recommended to report, and at what level of detail. Five stakeholder groups (N = 25) participated in anonymous online voting and discussion. The TAU characteristics rated critical to report at a general level of detail were the primary health behaviours, active content, tailoring of active content, duration characteristics (frequency, number and length of sessions), the profession of the provider, and any major deviations from the intended TAU. Setting characteristics (location, setting, mode, rural/urban characteristics) were considered critical to report for every participating clinic. These findings clarify how TAU should be reported when describing health behavioural trials, which in turn can lead to better understanding, interpretation and comparison of trials with a TAU comparator.
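
To make the consensus concrete, here is a hedged sketch of what a structured TAU description covering the critical characteristics might look like, written as an R list. The field names and values are purely illustrative and are not a format prescribed by the study:

# Illustrative TAU description covering the characteristics rated critical
# in the Delphi study (all field names and values are hypothetical):
tau <- list(
  primary_health_behaviours = c("smoking cessation"),
  active_content            = c("brief advice", "self-help leaflet"),
  tailoring_of_content      = "advice tailored to readiness to quit",
  duration = list(frequency          = "monthly",
                  number_of_sessions = 3,
                  session_length_min = 10),
  provider_profession       = "practice nurse",
  deviations_from_intended  = "none reported",
  # Setting characteristics, reported per participating clinic:
  settings = list(
    clinic_A = list(location = "Aberdeen", setting = "primary care",
                    mode = "face-to-face", area = "urban"),
    clinic_B = list(location = "Aberdeenshire", setting = "primary care",
                    mode = "face-to-face", area = "rural")
  )
)
str(tau, max.level = 2)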

Repository: https://osf.io/n7upg/  Link to talk: https://osf.io/8erbp/

– – – – – –

Acyclic Behaviour Change Diagrams: human- and machine-readable reporting of what interventions target and how – Gjalt-Jorn Ygram Peters, Open University of the Netherlands

To progress behaviour change science, research syntheses are crucial. However, they are also costly, and unfortunately, often yield relatively weak conclusions because of poor reporting. Specifically, in the context of behaviour change, the structural and causal assumptions underlying behaviour change interventions are often poorly documented, and no convenient yet comprehensive format exists for reporting such assumptions.

In this talk, Acyclic Behaviour Change Diagrams (ABCDs) were introduced. ABCDs consist of two parts. First, there is a convention that allows specifying the assumptions most central to the dynamics of behaviour change in a uniform, machine-readable manner. Second, there is a freely available tool to convert such an ABCD specification into a human-readable visualisation (the diagram), included in the open source R package ‘behaviorchange’.

ABCD specifications are tables with seven columns, where each row represents one hypothesised causal chain. Each chain consists of a behaviour change principle (BCP), for example a behaviour change technique (BCT), that leverages one or more evolutionary learning processes; the corresponding conditions for effectiveness; the practical application implementing the BCP; the targeted sub-determinant, such as a belief; the higher-level determinant that belief is part of; the sub-behaviour predicted by that determinant; and the ultimate target behaviour. The ABCD was illustrated using an evidence-based intervention to promote hearing protection.
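
As a rough sketch of the format, one row of an ABCD specification could be built in R as below. The column names here simply follow the seven elements just described; the exact names and order expected by the package may differ, so consult the ‘behaviorchange’ documentation:

# One hypothetical causal chain for a hearing-protection intervention;
# each row of an ABCD specification is one such chain.
abcd_spec <- data.frame(
  behaviour_change_principle   = "Persuasive communication",
  conditions_for_effectiveness = "Arguments relevant and not too discrepant",
  application                  = "Leaflet on noise-induced hearing loss",
  sub_determinant              = "Belief: loud concerts can damage my hearing",
  determinant                  = "Attitude",
  sub_behaviour                = "Buy earplugs before attending a concert",
  target_behaviour             = "Use hearing protection at concerts",
  stringsAsFactors = FALSE
)

# The 'behaviorchange' package converts such a specification into the
# human-readable diagram; the call below assumes the abcd() function
# accepts this data frame (check the package docs for exact column names):
# install.packages("behaviorchange")
# behaviorchange::abcd(abcd_spec)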

ABCDs not only make important assumptions underlying behaviour change interventions clear to editors and reviewers, but also help developers retain an overview during intervention development or analysis. At the same time, because ABCD specifications are machine-readable, they maximise research synthesis efficiency.

Repository: https://osf.io/4ta79  Link to talk: https://osf.io/utw95/

– – – – – –

Development of an ontology characterising the ‘source’ delivering behaviour change interventions – Emma Norris, University College London

Who delivers an intervention, its ‘source’, is an important consideration in understanding an intervention’s effectiveness. However, this source is often poorly reported. To accumulate evidence across studies, it is important to use a comprehensive and consistent method for reporting intervention characteristics, including the intervention source. As part of the Human Behaviour-Change Project, this study used a structured method to develop an ontology specifying source characteristics, forming part of the Behaviour Change Intervention Ontology.

The current version of the Source Ontology has 196 entities covering the source’s occupational role, socio-demographics, expertise, relationship with the individuals targeted by the intervention, and whether the source was paid to deliver the intervention. The Source Ontology thus captures key characteristics of those delivering behaviour change interventions. This is useful for replication, implementation and evidence synthesis, and provides a framework for describing the source when writing and reviewing evaluation reports.
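
As a purely illustrative example of the kind of structured description the ontology enables (the field names and values below are hypothetical, not verbatim ontology entities):

# Hypothetical structured description of an intervention source along the
# dimensions covered by the Source Ontology; labels are illustrative only.
source_description <- list(
  occupational_role       = "community pharmacist",
  socio_demographics      = list(age_group = "30-39", gender = "female"),
  expertise               = "trained in brief smoking-cessation advice",
  relationship_to_targets = "existing patient-provider relationship",
  paid_to_deliver         = TRUE
)
str(source_description)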

Repository: https://osf.io/h4sdy/  Link to talk: https://osf.io/hg6nd/

– – – – – –

Enhancing research synthesis by documenting intervention development decisions: Examples from two behaviour change frameworks – Marta M. Marques, Trinity College Dublin & Gjalt-Jorn Y. Peters, Open University of the Netherlands

To support the development of behaviour change interventions, there is a considerable amount of guidance (e.g. Intervention Mapping) on how to select behaviours, identify behavioural determinants, and select methods/techniques. When it comes to decisions about which modes of delivery are best for certain methods, and how they should be designed, there is little guidance.

While researchers make these decisions during the development of interventions, these decisions are not well documented, and as such, opportunities to learn from the justifications of those decisions are lost. An easily usable, systematic, efficient and machine-readable approach to reporting decisions and justifications of such decisions would improve this situation and enable accumulation of knowledge that at present remains largely implicit.

We introduced ‘justifier’, an R package for reading and organising fragments of text that encode such decisions and justifications. By adhering to a few simple guidelines, the meeting minutes and documentation of the intervention development process become machine-readable, enabling aggregation of the decisions and their evidence base over one or multiple intervention development processes. Visualisations such as heat maps can then be used to reveal these patterns, quickly making salient where decisions were based on higher- and lower-quality evidence.
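
As a hedged sketch of the idea, meeting minutes gain machine-readable YAML fragments between ‘---’ delimiters that ‘justifier’ can then extract. The field names, nesting and the load_justifications() call below reflect our reading of the package and may differ from the authoritative specification:

# Sketch: embed a machine-readable decision fragment (YAML between '---'
# delimiters) in otherwise free-text meeting minutes, then load it with
# the 'justifier' package. Field names are illustrative; see the package
# documentation for the exact format.
minutes <- '
Meeting 2019-05-14: we discussed delivery modes for the feedback module.

---
decision:
  id: delivery_mode
  label: "Deliver feedback by text message rather than app notification"
  justification:
    id: reach
    label: "Text messages reach participants without smartphones"
    assertion:
      id: phone_ownership
      label: "A fifth of the target group does not own a smartphone"
      source:
        id: local_survey
        label: "2018 municipal health survey"
---

The next meeting will review the first message drafts.
'

minutes_file <- tempfile(fileext = ".txt")
writeLines(minutes, minutes_file)

# install.packages("justifier")
justifications <- justifier::load_justifications(minutes_file)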

We presented examples of key decisions to justify during intervention development, using common steps in the Intervention Mapping protocol and the Behaviour Change Wheel. Using the proposed ‘justifier’ format to document the decisions and their justifications in these key domains provides greater insight into the intervention development process.

Link to talk: https://osf.io/zmb3g/

– – – – – –

Discussion: Why and when would research teams use these tools? – Alexandra Dima, Health Services and Performance Research, University Claude Bernard Lyon 1, France

Participating in a symposium that introduces 5 new tools for intervention development and reporting (or reading a blog about such a symposium) may generate mixed feelings of enthusiasm and anxiety. On one hand, as was stated previously at the EHPS conference, it is an exciting time to be a researcher. We have the possibility to do better work and communicate better about it, so that our individual contributions can add to the common edifice of evidence. On the other hand, ‘5 new tools’ also means ‘5 new ideas’ to get one’s head around, to convince others of their importance for the project, and to link to specific actions or habits to integrate into the workflow. Most likely, this comes on top of 100 more ideas one needs to master during a usual research project – assuming there is time and funding to adopt them all and work towards best research practices and the high-quality evidence they produce. Getting to grips with these new tools requires effort, and we should not underestimate the cost involved. On the contrary, we need to plan for it. Yet I would argue that these 5 tools, if used adequately and at the right moment in the research process, have the potential to save costs. They do this by making it easier to ask some good but uncomfortable questions which are unavoidable in any intervention development and reporting process.

First, asking the question ‘Is there any potential for change in the target behaviour and its determinants?’ is among those key uncomfortable questions at the beginning of an intervention study. In principle, one would need at least an estimate of the mean and standard deviation for a sample size calculation before moving on. PΔ scores ask this question at the individual level as well, and thus confront us with a key conceptual puzzle: if you know that participants vary in their potential to be influenced by the intervention, should you control for it to estimate the ‘real’ effect of the intervention? PΔ scores propose a way to quantify this variation: if baseline data indicate large variation, then tailoring needs to be considered.

Second, variation is present not only in the target group, but also in the services available to them at baseline, raising another uncomfortable question: “what is already there in terms of behaviour change activities, or ‘treatment-as-usual’?”. The TAU characteristics checklist is likewise best considered at the beginning of intervention development. If you are planning a multi-centre study, it is even more important to ask these questions early, since this is in a way a PΔ for the participating centres. Considering this checklist brings with it numerous options to weigh about the intervention itself. Would it be better to add an extra service or to modify the work of existing providers? Would it be beneficial, or even possible, to improve standardisation, granularity and reporting in current practice? Finding a balance between benefit and feasibility starts with TAU characterisation.

A third uncomfortable question is: ‘who is going to do it?’ The Source Ontology gives us a common dictionary (for humans and machines alike) to report this clearly, but its initial value is, in my opinion, also as a decision tool. Once you know that 196 types of source are available, you just cannot avoid this question. No, it is not obvious that the pharmacist, or the schoolteacher, will do it. In a time of evolution towards integrated care, this question will quickly take us to the revelation that most behaviour change processes are supported by several types of people and organisations, and that the effect of TAU or of interventions is the result of all of these.

Fourth, once you are well into intervention development and start getting answers to these initial questions, another, maybe even more uncomfortable question pops up: ‘what is the logic of what we are trying to do?’. At this point you are safer from forgetting to ask basic questions, but the danger of incoherence, and of making choices you will regret later, is still high. ABCD prompts you to do the right thing: first run a visual consistency check for yourself, then discuss it with the research team and stakeholders, making sure everyone has a chance to give input at this stage and to pick up any awkward or missing links. ABCD will also come in handy when reporting and for systematic reviews, if research teams use it consistently for similar studies. But there will be little to report if ABCD, and the intervention development process it supports, are not used routinely in the development phase.

And finally, an uncomfortable and apparently innocent question with profound consequences throughout the process is: ‘what decisions should we document, and how?’ The other 4 tools point to various choices which, if not made consciously, may have unexpected effects on intervention effectiveness and evidence quality. Justifier proposes a way to make these choices and record the decisions in a form that can be accessed automatically and tested in systematic reviews. In a world in which such tools become routine practice, in time and with adequate coordination between research teams, we would be able to compare these choices in terms of intervention- and evidence-related outcomes. But the more tangible benefit for the individual researcher is building good habits for writing meeting minutes that can be used for feedback on the development process.

Link to Discussion talk: https://osf.io/up4t9/

– – – – – –

If you are currently setting up an intervention project, this would be a good moment to ask yourself whether, and where, you could test these tools in your project. The symposium presenters have all been there and are happy to advise. For the EHPS community, and other groups interested in health behaviour change, it is time to ask ourselves how to coordinate support so that research teams worldwide have easier access to, and training in, these methodological innovations, and, most importantly, how to test their effectiveness in real research practice.

You can find the slides for the symposium here: https://osf.io/hvkaz/

Bios

Emma Norris (@EJ_Norris) is a Research Fellow on the Human Behaviour-Change Project at UCL’s Centre for Behaviour Change.

Alexandra Dima (@a__dima) is a Senior Research Fellow in Health Sciences at Université Claude Bernard Lyon.

Neža Javornik (@NJavornik) is a PhD student in the Health Psychology Group at the University of Aberdeen.

Marta M. Marques (@marta_m_marques) is a Marie Skłodowska-Curie Fellow at Trinity College Dublin, and Honorary Research Fellow at the Centre for Behaviour Change, University College London.

Gjalt-Jorn Y. Peters (@matherion) is an Assistant Professor in Methodology, Statistics and Health Psychology at the Open University of the Netherlands.

Keegan Knittle (@keeganknittle) is a University Researcher at the University of Helsinki focusing on understanding people’s motivations for behaviour.