
‘Health Chatter’: Research Department of Behavioural Science and Health Blog


Archive for the 'General' Category

Lessons from a (not so) rapid review

By Robert Kerrison, on 7 March 2019

Authors: Robert Kerrison, Christian von Wagner, Lesley McGregor

Introduction

Systematic reviews enable researchers to bring together evidence from multiple studies in order to reach a consensus. One of their major limitations, however, is that they generally take a long time to perform (roughly 1-2 years; Higgins and Green, 2011). Often, an answer to a question is required quickly, or the resources for a full systematic review are not available. In such instances, researchers can perform what is known as a ‘rapid review’: a review in which steps of the systematic review process are simplified or omitted.

At present, there are no formal guidelines describing how to perform a rapid review. A number of methods have been suggested (Tricco et al., 2015), but none is recognised as best practice. In this blog, we describe our experience of conducting a rapid review, the obstacles we encountered, and what we would do differently next time.

For context, our review was performed as part of a wider project funded by Yorkshire Cancer Research. The aim of the project was to develop and test interventions to promote flexible sigmoidoscopy (‘bowel scope’) screening use in Hull and East Riding. The review was intended to inform the development of the interventions by identifying possible reasons for low uptake.

Obstacles

Our first task was to select an approach from the plethora of options described in the extant literature. Because many rapid reviews are criticised for not providing a rationale for terminating their search at a specific point (Featherstone et al., 2015), we opted for a staged approach (previously described by Duffy and colleagues), in which researchers continue to expand their search until fewer than 1% of articles are eligible on title and abstract review. The underlying assumption is that, if successive expansions yield diminishing numbers of potentially eligible publications, and the most recent expansion adds relatively little to the pool, then stopping the expansion at that point is unlikely to lead to a major loss of information.

After deciding on an approach, our next task was to ‘iron out’ any kinks in the method selected. Several aspects of the review method were not fully detailed by Duffy and colleagues in their paper, and therefore needed to be addressed, including: 1) how the authors selected search terms for the initial search; 2) how they selected the combination and order in which search terms were added to successive searches; 3) whether they restricted search terms to titles and abstracts; 4) how many authors screened titles and abstracts; and 5) if two or more authors reviewed titles and abstracts, how disagreements between reviewers were resolved.

Through discussion, we agreed that: 1) the initial search should include key terms from the research question; 2) successive searches should add one term analogous to each of those included in the initial search (to ensure a large number of new papers was retrieved); 3) the order and combination in which search terms were added to successive searches should be whichever yielded the greatest number of papers (to ensure that the search was not terminated prematurely); 4) search terms should be restricted to titles and abstracts; 5) titles and abstracts should be screened by at least two reviewers; and 6) disagreements between reviewers should be resolved through discussion (see Kerrison et al., 2019, for full details of the method used).
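To make the staged procedure concrete, here is a minimal Python sketch of the expansion loop and the 1% stopping rule described above. The run_search and count_eligible functions are hypothetical stand-ins (our own searching and screening were done by hand), so treat this as an illustration of the logic rather than an official implementation of the method.

```python
# Illustrative sketch of the staged search procedure described above.
# `run_search` and `count_eligible` are hypothetical stand-ins for
# querying a bibliographic database and screening titles/abstracts.

def run_search(terms):
    """Return the set of record IDs matched by a title/abstract search."""
    raise NotImplementedError  # e.g. a query against PubMed/Embase

def count_eligible(records):
    """Return how many records pass title and abstract screening."""
    raise NotImplementedError  # in practice, done by two reviewers

def staged_search(initial_terms, candidate_terms, threshold=0.01):
    terms = list(initial_terms)
    pool = run_search(terms)
    candidates = list(candidate_terms)
    while candidates:
        # Add the candidate term that retrieves the most new records,
        # so the search is not terminated prematurely.
        best = max(candidates, key=lambda t: len(run_search(terms + [t]) - pool))
        new_records = run_search(terms + [best]) - pool
        terms.append(best)
        candidates.remove(best)
        pool |= new_records
        # Stopping rule: halt once fewer than 1% of the newly
        # retrieved records are eligible on title/abstract review.
        if new_records and count_eligible(new_records) / len(new_records) < threshold:
            break
    return terms, pool
```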

Experience

Having agreed an approach, and ironed out these issues, we were then faced with the task of performing the review itself. While this took less time than a traditional systematic review, it was still a lengthy process (approximately 4 months). As with the systematic method, we were required to screen hundreds of titles and abstracts and extract data from many full-text articles. Perhaps the most time-consuming aspect of the entire review was manually entering the many different combinations of search terms to see which gave the largest number of papers at each stage. In the future, a computer program could be developed to automate this process; however, this is only likely to happen if the method is widely adopted by the research community.

After performing the review, we submitted the results for publication in peer-reviewed journals. Having never previously performed a rapid review, we were uncertain how it would be received. Disappointingly, our initial submission was rejected, although we did receive some helpful comments from the reviewers. While we were slightly discouraged, we decided to submit our article to Preventive Medicine, where it received positive reviews and, after major revisions, was accepted for publication.

Next time

So, what would we do differently next time? For a start, we would consider using broader search terms. Our searches detected only 52% of the included papers before we searched the reference lists of selected papers. We think the main reason for this is that search terms were restricted to titles and abstracts, which often did not specifically mention ‘flexible sigmoidoscopy’ (or variants thereof). Instead, most papers referred to predictors of colorectal cancer screening in general in the abstract (keywords we had deliberately excluded from our search terms to reduce the number of irrelevant papers reviewed), and then to predictors of each test in the main text. This problem is likely to recur in other contexts (e.g. diagnostics and surveillance).

Another key change would be to include qualitative studies, with appropriate search terms to identify them. A mixed-methods approach would help explain some of the associations observed, and thereby how best to develop interventions to address inequalities in uptake.

Final thoughts

Conducting a ‘rapid’ (4 months!) review has been an enjoyable experience. Like any research, it has at times been difficult. The lack of formal guidance, of the kind available for many other forms of research, made the process perhaps harder than it needed to be. With rapid reviews becoming increasingly common (read all about this here), it is our hope that this blog and the accompanying paper will help make the process easier for others considering rapid reviews in the future.

Acknowledgements

This study was funded by Yorkshire Cancer Research (registered charity 516898; grant number UCL407).

References

Duffy SW, et al. (2017) Rapid review of evaluation of interventions to improve participation in cancer screening services. Journal of Medical Screening 24(3): 127-145.

Featherstone RM, Dryden DM, Foisy M, et al. (2015) Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Systematic Reviews 4(1): 50.

Higgins JP, Green S (2011) Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0.

Kerrison RS, von Wagner C, Green T, Winfield M, Macleod U, Hughes M, Rees C, Duffy S, McGregor L (2019) Rapid review of factors associated with flexible sigmoidoscopy screening use. Preventive Medicine.

Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE (2015) A scoping review of rapid review methods. BMC Medicine 13(1): 224.

Can we help the public understand the concept of ‘overdiagnosis’ better by using a different term?

By rmjdapg, on 28 June 2018

Authors: Alex Ghanouni, Cristina Renzi & Jo Waller

We have previously written about ‘overdiagnosis’ – the diagnosis of an illness that would never have caused symptoms or death had it remained undetected – and how the majority of the public are unfamiliar with the concept and find it difficult to understand. We have also looked at the various ways that health websites describe it in the context of breast cancer screening; we previously found that most UK websites include some relevant information, in contrast to the last similar study from 10 years ago. This led us to think about how it might be possible to better explain the concept to people. Although ‘overdiagnosis’ is the most commonly used label, its meaning is probably difficult to infer if people are unfamiliar with it (and most people are). We wanted to test whether other terms might be seen as more intuitive labels that would help communicate the concept to the public.

We carried out a large survey in which we asked around 2,000 adult members of the public to read one of two summaries describing overdiagnosis. These summaries were based on information leaflets that the NHS has already used extensively in England. We asked people whether any of a series of possible alternative terms made sense to them as a label for the concept described and whether they had encountered any of the terms before.

What did we find?

A fairly large proportion of people (around 4 out of 10) did not think any of the seven terms we suggested were applicable labels for the concept as we described it. We also found that no single term stood out as being seen as particularly appropriate. The term most commonly endorsed (“unnecessary treatment”) was only rated as appropriate by around 4 out of 10 people. Another important finding was that around 6 out of 10 people had never encountered any of the terms we suggested and that the most commonly encountered term (“false positive test results”) was only familiar to around 3 out of 10 people. You can read the full paper here.

What were our conclusions?

We were disappointed that we did not find a term that was clearly considered to be an intuitive label for the concept of overdiagnosis. However, this was not entirely surprising because we know from several studies that it is unfamiliar to most people. It is not a given that this will always be the case: organisations like the NHS and health charities are continually telling the public about overdiagnosis in various ways, and if the concept becomes more familiar and better understood, people may be more inclined to identify a term that makes sense and which can then be used to communicate the concept. It is also possible that terms other than the seven we looked at might already be suitable. Since the terms we looked at were generally unfamiliar, one recommendation we can make in the meantime is that it might be better to avoid specific labels like “overdiagnosis” when communicating the issue to people; explicit descriptions might be more helpful.

What does the UK public understand by the term ‘overdiagnosis’?

By rmjdapg, on 14 April 2016

Authors: Alex Ghanouni, Cristina Renzi & Jo Waller

In recent years, doctors and academics have become more and more interested in a problem referred to as ‘overdiagnosis’. There are several ways that overdiagnosis can be defined.

One particularly useful way is to think of it as the diagnosis of a disease that would never have caused a person symptoms or led to their death, whether or not it had been found through a medical test. In other words, even if a person had not had the test, the disease would never have caused them any harm.

Catching it early

It may not be obvious how this can happen. As an example, imagine a woman going for breast screening, which tries to find cancer at an early stage, before it starts causing symptoms.

The thinking behind this type of test is that if the disease is found early, it will be easier to treat and there is a higher chance of curing it. Most people are familiar with this idea that ‘catching it early’ is a good thing.

So, suppose a woman who has no symptoms goes for screening and the test finds cancer: she would usually go on to have treatment (e.g. surgery).

However, although she has no way of knowing for sure, it is possible that the cancer was growing so slowly that she would have lived into old age and died of something unrelated, without ever knowing about the cancer, had she not gone for screening.

The cancer is real but the diagnosis does not benefit the woman at all; it results in treatment that she did not need (‘overtreatment’). In fact, if she had not had the screening test, she would have avoided all the problems that come with a cancer diagnosis and treatment.

What research has found

If you find the idea of overdiagnosis counter-intuitive, you are not alone. Several studies have tried to gauge public opinion on the issue and found that this is a fairly typical view, partly because the notion that some illnesses (like cancer) might never cause symptoms or death is one that does not receive much attention and is often at odds with our personal experiences.

Results from an Australian study in 2015 found that awareness of ‘overdiagnosis’ is low: in a study of 500 adults who were asked what they thought it meant, only four out of ten people gave a description of the term that was considered approximately correct, and even these descriptions were often inaccurate to varying degrees.

For example, people often thought in terms of a ‘false positive’ diagnosis (diagnosing someone with one illness when really they do not have that illness at all), or giving a person ‘too many’ diagnoses.

Is this the same in the UK?

We wanted to find out whether this was also true in the UK. We asked a group of 390 adults whether they had come across the term ‘overdiagnosis’ before and asked them to describe what they thought it meant in their own words, as part of an online survey.

We found that only a minority (three out of ten people) had encountered the term and almost no-one (10 people out of all 390) described it in a way that we thought closely resembled the concept described above.

It was not always clear how best to summarise people’s descriptions, but people often stated that they had no knowledge, or held conceptions similar to those in the Australian survey, such as ‘false positives’ and ‘too many’ diagnoses.

Some descriptions were somewhat closer to the concept of overdiagnosis such as an ‘overly negative or complicated’ diagnosis (e.g. where the severity of an illness is overstated) but there were also some descriptions that we found more surprising such as being overly health-conscious (e.g. worrying too much about health issues).

Room for improvement

Many people who work in public health and healthcare believe that the public should be aware of the possibility of overdiagnosis, particularly since most people will eventually be offered screening tests that carry this risk.

In this respect, our findings show that there is substantial room for improvement in how we inform the public about overdiagnosis. In part, this may be due to the term itself not having an intuitive meaning, in which case other terms might be more helpful (for example the term ‘unnecessary detection’).

This could be tested in future studies. Our findings also motivated us to find out the extent to which trusted information sources (such as websites run by the NHS and leading health charities) are already providing information on overdiagnosis.

We will share the findings from that study in a follow-up blog post here soon.

This was originally posted on the BioMed Central blog network.

Exploring Twitter for Health Research

By Siu Hing Lo, on 22 May 2015

Twitter is probably one of the most obvious resources available for gauging public sentiment. It offers a rich, large-scale data source that can give insight into what people are thinking without having to interview or survey them. However, the use of Twitter data for research is relatively unexplored terrain.  So before conceiving of any “serious” research studies, my colleague Alex Ghanouni and I decided to explore Twitter as a data resource. In this piece, I would like to share our thoughts about our first informal attempt at venturing into the ‘Twittersphere’.

The starting point of our adventure was a curiosity about what is being said about cancer treatment and cancer prevention in social media. We adapted publicly available Python code to track keywords in real time. The first iteration was for one hour only (12th March 2015); the subsequent two iterations were for 24 hours each (24th March; 5th May).
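For readers curious what this kind of tracking involves, here is a minimal sketch using tweepy’s version 3-era streaming interface (since replaced in tweepy 4). It illustrates the general approach rather than the code we actually adapted, and the credentials are placeholders.

```python
# Minimal sketch of real-time keyword tracking (tweepy v3-era API);
# illustrative only -- not the code we adapted. Credentials are placeholders.
import tweepy

class KeywordCounter(tweepy.StreamListener):
    """Counts how many streamed tweets mention each tracked keyword."""

    def __init__(self, keywords):
        super().__init__()
        self.counts = {kw: 0 for kw in keywords}

    def on_status(self, status):
        text = status.text.lower()
        for kw in self.counts:
            if kw in text:
                self.counts[kw] += 1

    def on_error(self, status_code):
        return False  # disconnect on errors such as rate limiting

keywords = ["cancer treatment", "cancer prevention"]
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
listener = KeywordCounter(keywords)
stream = tweepy.Stream(auth=auth, listener=listener)
stream.filter(track=keywords)  # blocks; run for the desired time window
print(listener.counts)
```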

One seemingly easy question to address was the volume of tweets about “cancer treatment” and “cancer prevention” in relation to each other and “cancer” in general. We naively assumed that a count of tweets would be able to address our question. However, after the second iteration, it became apparent how naive our initial searches had been: Many of the tweets found using the keyword “cancer” turned out to be referring to the zodiac sign. As we could not think of a select group of second keywords that would be (almost) guaranteed to be used in conjunction with “cancer” the disease, we gave up on tracking “cancer” alone.

We had more luck with “cancer treatment”, “cancer prevention” and their relatively unambiguous synonyms and permutations. The volume of tweets for “cancer treatment” (24th March: 8355; 5th May: 5558) was consistently larger than that for “cancer prevention” (24th March: 5156; 5th May: 1487). This was true even around the 24th of March, the day the news broke about Angelina Jolie’s preventative surgical removal of her ovaries and fallopian tubes. Although these findings do not reveal what is said about these topics, they nevertheless give an indication of how much interest each generates. When it comes to cancer, the public discourse appears to revolve mainly around treatment rather than prevention. This is also in line with what we expected based on our professional and personal experience. Although our present investigation could not have been more rudimentary, more serious attempts at tracking specific keywords over longer periods of time might lead to genuinely novel insights.

Of course, we were also at least as interested in the content of the tweets about “cancer treatment” versus “cancer prevention”. To avoid a time-consuming traditional content analysis, we used the free web-based tool ‘Wordle’ to create word clouds, which reflect the frequency of words in a text. Before creating the word clouds, we first removed all search terms from the tweet texts. When we examined the word clouds, it became clear that there were two reasons why words were frequently used. Firstly, the words could be related to “real” news, which was the case for cancer prevention on the 24th March from 12pm GMT:

[Word cloud: ‘cancer prevention’ tweets, 24 March 2015]

However, in two of the four word clouds we inspected, the most prominent words related to an obscure news source tweeting about dubious cancer cures (most likely for commercial reasons) or out-of-date research findings (a cervical screening paper from 1979).  Finally, it was hard to interpret the results of the fourth word cloud, as there were few words that really stood out.  A few of the largest words originated from a poem line (“That smile could end wars, and cure cancer”).

[Word cloud: ‘cancer treatment’ tweets, 5 May 2015]
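For anyone wanting to produce a similar summary without a web tool, a word cloud is essentially a word-frequency count. Below is a minimal sketch, assuming the tweets have already been collected as a list of strings; as described above, the search terms themselves are removed first.

```python
# Minimal word-frequency summary of the kind a word cloud visualises.
# Assumes tweets have already been collected as a list of strings.
import re
from collections import Counter

SEARCH_TERMS = {"cancer", "treatment", "prevention"}

def word_frequencies(tweets, top_n=30):
    counts = Counter()
    for tweet in tweets:
        words = re.findall(r"[a-z']+", tweet.lower())
        # Drop the search terms themselves, as we did before
        # building the word clouds.
        counts.update(w for w in words if w not in SEARCH_TERMS)
    return counts.most_common(top_n)

sample = ["Early detection saves lives #cancer",
          "New study links diet to cancer prevention"]
print(word_frequencies(sample))
```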

As an academically-trained researcher, I felt compelled to do a quick – albeit not too rigorous – literature search for peer-reviewed publications as well. Both the PubMed and PsycINFO databases yielded around 750 hits containing the keyword “Twitter”. Compared with other one-word search terms, this is a modest number.  One review of published health studies using Twitter data concluded that most researchers lacked the knowledge and skills to process the large volumes of data and limited their samples in accordance with their ability to process and analyse the data (Finfgeld-Connett, 2014).  A second limitation they noted was the population-representativeness of Twitter users, or rather, the lack thereof.  Broadly speaking, we concurred with this review’s conclusions, although we would like to add a few nuances and additional observations.

Let’s start by looking at us, the researchers, first. Our first research experience with Twitter was in line with the challenges of using Big Data for health research that I discussed in a previous blog post. Most of us who are interested in the content of social media tend to have a social science background. Programming and data mining are therefore not part of the skillset we acquired through formal education. This obviously constrains what we can do with large volumes of data without help from those who are conventionally employed to work with Big Data. Having said that, we felt that the lack of a reliable alternative to human judgement limited us more than our technical skills. We repeatedly needed to resort to simpler forms of analysis (i.e. reading the tweets…) to determine what the data were actually telling us, and there seemed to be no obvious way we could have outsourced this task to an algorithm.

Similarly, although there are probably sophisticated programs to weed out bot-generated tweets, the authenticity of tweets might be a more general problem which cannot easily be addressed without human intervention. The most obvious challenge is that tweets originate from a variety of users who have diverse professional, commercial and personal motives. This is compounded by Finfgeld-Connett’s observation regarding the representativeness of Twitter users.

These challenges may not be insurmountable, but they do highlight that Twitter data is far from “clean” and straightforward to interpret for health research purposes.  I for one will be keeping a keen eye on future research endeavours tackling these issues.

 

References

Finfgeld-Connett, D. (2014), ‘Twitter and Health Science Research’, Western Journal of Nursing Research, 1-15.

UCL Qualitative Health Research Symposium: “Enriching qualitative inquiry in health”

By Charlotte Vrinten, on 4 February 2015

Interest in qualitative approaches to health research is growing. This is a welcome endorsement of their contributions, but there may be tensions between the philosophies and practices of qualitative approaches and those of a prevailing, quantitatively oriented health research culture.

Following the success of a jointly organised 2013 symposium, qualitative research groups at UCL’s Health Behaviour Research Centre, Department of Applied Health Research (DAHR), and the Division of Psychiatry are organising their second symposium dedicated to qualitative health research: “Enriching Qualitative Inquiry in Health”.

This symposium will be held on Wednesday 18 February 2015 from 9:00-17:00 and will include a range of national and international speakers, as well as a keynote address by Dr Sara Shaw (Queen Mary University of London).  Attendance is free, but registration is essential.

  • The full programme of the day can now be found on the HBRC website.
  • For more information about how to register, go to the DAHR website.
  • For background and context of the day, see the Division of Psychiatry blog.
  • Some examples of qualitative research conducted by the HBRC can be found here.

The promises and pitfalls of Big Data for personalised health care.

By Siu Hing Lo, on 12 December 2014

Big Data is burgeoning. A quick search using Google Trends shows that worldwide interest has risen sharply since 2011 and is still on the rise. There also seems to be a consensus that Big Data has huge potential to improve health services (Accenture Industrial Internet Insights Report for 2015). Of course, the use of large health datasets has a long tradition in epidemiology and other public health disciplines. However, the sheer scale, variety and complexity of Big Data mean we increasingly rely on artificial intelligence to manage and analyse data. Simultaneously, there is a trend towards tailoring and personalising health care services, often facilitated by the increasing availability of personal data and more powerful analytical tools.

Big Data certainly has the potential to yield valuable insights. However, it could also be a double-edged sword, especially in the case of health care tailoring. Yes, Big Data could advance health risk ‘profiling’ and enable more cost-effective ways to tailor health services. But as Khoury and Ioannidis put it, the promise of Big Data also brings the risk of “Big Error”.

Possibly the most obvious caveat lies in the critical interpretation of data (Susan Etlinger at TED talks, 2014). Data does not speak for itself: people need to make the leap from data to insights. One major challenge is how to understand the data. As complexity grows, both the data and the analysis are more likely to be biased in ways the human interpreter has not foreseen (the unknown unknowns, in other words). Another related issue is researcher bias in interpreting the results. What do numbers really tell us about other people’s needs, preferences and perceptions? As complexity increases, fewer people will be able to familiarise themselves sufficiently with the data and analytical methods used to critique study results.

A less obvious source of bias is the type of information sources that are relied on. Of course, Big Data is to be welcomed if it can yield useful insights. However, the promise of Big Data might overshadow the use of smaller scale (e.g. survey, qualitative interview, ethnographic) data and experimental studies. Research funding is finite and popular trends could unduly influence allocation of resources to studies proposing to use Big Data.

The question is whether Big Data can live up to its promise without skilful, complementary use of other methods of inquiry. Despite all the advances in speech recognition and robotic engineering, human beings are generally still much better at understanding other people. Especially in the case of health service tailoring and personalisation, it seems that caution is warranted. After all, if Amazon’s algorithm suggests a book you don’t like, little is at stake and you can ignore the recommendation. But if an algorithm suggests a bad care plan, it would be a less trivial problem, especially if you cannot ignore it. Multidisciplinary collaborations addressing the same public health inquiry would be important for the “Big Picture” as well as the avoidance of “Big Error”. Although multidisciplinary collaboration is not always easy, different viewpoints can facilitate critical and reflective thinking, key ingredients for any meaningful research and truly personalised health care.

To give a personal example, two of my earlier blog posts describe the behavioural insights I derived from studying records of 300,000 people invited for bowel cancer screening in the South of England. Perhaps reflecting wider trends, research psychologists like myself are using ever larger datasets to study people’s health behaviours and identify target groups for behaviour change interventions. However, one question we could not address in these studies was why people behaved the way they did, even if we could predict what they would do. Although these studies did not use Big Data, they illustrate the challenge we face using Big Data or any type of pre-existing data generated for purposes other than research. In order to improve public health, observing the status quo alone is not enough.

Big Data has the potential to yield powerful insights for health service tailoring and personalisation. However, the process of arriving at these insights may pose considerable challenges. Critical thinking and the involvement of researchers who do not typically work with Big Data will be key to its effective use as a tool for health care research.

References

Accenture (2014), Industrial Internet Insights Report for 2015, Available at http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-Industrial-Internet-Changing-Competitive-Landscape-Industries.pdf

Khoury, M.J. & J.P.A. Ioannidis (2014), Big data meets public health, Science, 346, 1054-1055.

Lo, S.H., Halloran, S., Snowball, J., Seaman, H., Wardle, J. & C. von Wagner (2014), Colorectal cancer screening uptake over three biennial invitation rounds in the English Bowel Cancer Screening Programme, Gut, Published Online First: 7th May 2014, doi:10.1136/gutjnl-2013-306144.

Lo, S.H., Halloran, S., Snowball, J., Seaman, H., Wardle, J. & C. von Wagner (2014), Predictors of repeat participation in the NHS Bowel Cancer Screening Programme, British Journal of Cancer, Published Online First: 27th November 2014, doi: 10.1038/bjc.2014.569.

Susan Etlinger (2014), What do we do with all this Big Data?, Filmed September 2014 at TED@IBM http://www.ted.com/talks/susan_etlinger_what_do_we_do_with_all_this_big_data/transcript?language=en

Busting the 21 days habit formation myth

By Ben D Gardner, on 29 June 2012

Have you ever made a New Year’s resolution? If so, you may have been assured – usually by a well-meaning supporter of your attempted transformation – that you only have to stick with your resolution for 21 days for it to become an ingrained habit. The magic number 21 crops up in many articles about forming a new habit or making a change, but little is known about the origins of the ’21 days’ claim.

Psychologists from our department have devoted extensive time and effort to find out what it takes to form ‘habits’ (which psychologists define as learned actions that are triggered automatically when we encounter the situation in which we’ve repeatedly done those actions).

We know that habits are formed through a process called ‘context-dependent repetition’. For example, imagine that each evening, when you get home, you eat a snack. When you first eat the snack upon getting home, a mental link is formed between the context (getting home) and your response to that context (eating a snack). Each time you subsequently snack in response to getting home, this link strengthens, to the point that getting home comes to prompt you to eat a snack automatically, without giving it much prior thought; a habit has formed.

Habits are mentally efficient: the automation of frequent behaviours allows us to conserve the mental resources that we would otherwise use to monitor and control these behaviours, and to deploy them on more difficult or novel tasks. Habits are also likely to persist over time, because they are automatic and so do not rely on conscious thought, memory or willpower. This is why there is growing interest, both within and outside of psychology, in the role of ‘habits’ in sustaining our good behaviours.

So where does the magic ’21 days’ figure come from?

We think we have tracked down the source. In the preface to his 1960 book ‘Psycho-cybernetics’, Dr Maxwell Maltz, a plastic surgeon turned psychologist, wrote:

‘It usually requires a minimum of about 21 days to effect any perceptible change in a mental image. Following plastic surgery it takes about 21 days for the average patient to get used to his new face. When an arm or leg is amputated the “phantom limb” persists for about 21 days. People must live in a new house for about three weeks before it begins to “seem like home”. These, and many other commonly observed phenomena tend to show that it requires a minimum of about 21 days for an old mental image to dissolve and a new one to jell.’ (pp xiii-xiv)

How anecdotal evidence from plastic surgery patients came to be generalised so broadly is unclear.  One possibility is that the distinction between the term habituation (which refers to ‘getting used’ to something) and habit formation (which refers to the formation of a response elicited automatically by an associated situation) was lost in translation somewhere along the line. Alternatively, Maltz stated elsewhere that:

‘Our self-image and our habits tend to go together. Change one and you will automatically change the other.’ (p108)

Perhaps readers reasoned that, if self-image takes 21 days to change, and self-image changes necessarily lead to changes in habits, then habit formation must take 21 days. Although ‘21 days’ may apply to adjustment to plastic surgery, it is unfounded as a basis for habit formation. So, if not 21 days, how long does it really take to form a habit?

Researchers from our department have conducted a more rigorous and valid study of habit formation (Lally, van Jaarsveld, Potts, & Wardle, 2010). Participants performed a self-chosen health-promoting dietary or activity behaviour (e.g. drinking a glass of water) in response to a once-daily cue (e.g. after breakfast), and gave daily self-reports of how automatic (i.e. habitual) the behaviour felt. Participants were tracked for 84 days. Automaticity typically developed in a distinct pattern: initial repetitions of the behaviour led to quite large increases in automaticity, but these increases shrank as the behaviour was repeated more often, until automaticity plateaued. Assuming that the point at which automaticity plateaus is also the point at which the habit has formed, it took, on average, 66 days for the habit to form. (To clarify: that’s March 6th for anyone attempting a New Year’s resolution.)
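To illustrate the shape of this curve (purely an illustration, not the exact model specification used by Lally and colleagues), an exponential plateau captures the pattern described: large early gains in automaticity that shrink until the curve levels off. The rate parameter below is simply chosen so that the curve approaches its plateau at around the 66-day average.

```python
# Toy exponential-plateau model of the habit-formation pattern described
# above; illustrative only, not Lally et al.'s actual model specification.
import math

def automaticity(day, asymptote=42.0, rate=0.045):
    """Modelled automaticity after `day` daily repetitions (42-point scale)."""
    return asymptote * (1.0 - math.exp(-rate * day))

def days_to_plateau(fraction=0.95, rate=0.045):
    """Days until automaticity reaches `fraction` of its asymptote."""
    return math.log(1.0 - fraction) / -rate

print(days_to_plateau())  # approx. 66.6 days at this illustrative rate
```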

Interestingly, however, there were quite large differences between individuals in how quickly automaticity reached its peak, although everyone repeated their chosen behaviour daily: for one person it took just 18 days, and another did not get there in the 84 days, but was forecast to do so after as long as 254 days.

There was also variation in how strong the habit became: for some people habit strength peaked below the halfway point of the 42-point strength scale, and for others it peaked at the very top. It may be that some behaviours are more suited to habit formation – habit strength for simple behaviours (such as drinking a glass of water) peaked more quickly than for more complex behaviours (e.g. doing 50 sit-ups) – or that people differ in how quickly they can form habits, and how strong those habits can become.

The bottom line is: stay strong. 21 days is a myth; habit formation typically takes longer than that. The best estimate is 66 days, but it is unwise to fixate on a single number: the duration of habit formation is likely to differ depending on who you are and what you are trying to do. As long as you continue doing your new healthy behaviour consistently in a given situation, a habit will form. But you will probably have to persevere beyond January 21st.

Benjamin Gardner and Susanne Meisel

(www.ucl.ac.uk/hbrc/gardnerb)

 

References

Lally, P., van Jaarsveld, C. H. M., Potts, H. W. W., & Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology, 40, 998-1009. (http://onlinelibrary.wiley.com/doi/10.1002/ejsp.674/abstract)

Maltz, M. (1960) Psycho-cybernetics. NJ: Prentice-Hall.

Sleep, sleep, glorious sleep…

By Susanne F Meisel, on 28 June 2012

All animals need it, we go crazy without it, and yet we don’t understand it well – no, I am not talking about love here, but a much less considered, although just as profound, need: the need for sleep.

Sleep is currently a ‘hot topic’ in science, because it appears to be vital for all other major systems in our brains and bodies to function well – from how we feel, how well our muscles function and how well we concentrate, to the food choices we make. Moreover, there is growing evidence that shorter sleep is linked with a large number of diseases, such as obesity, heart disease, cancer, lowered immune function and mental health problems.

Although, as a nation, we sleep less than ever before, individuals vary substantially in their need for sleep – your partner may be chirpy after 7 hours, whereas you may need more to feel human. Interestingly, variation occurs even within the same families and among siblings; this raises the question of whether genes play a role in determining how much sleep a person needs, because families usually share a very similar environment. However, very few large studies have looked at what influences sleep early in life, when sleep is assumed to be mainly governed by the infant’s ‘body clock’. Twins are especially useful for teasing apart ‘nature’ and ‘nurture’, because twins are either 100% genetically identical, or they share on average half of their genes, just like ordinary siblings; both types, however, usually share the same environment, because they are born at the same time. Our researchers used data from the GEMINI birth cohort, which includes twins from about 2,000 families, to take a closer look at the genetic and environmental influences on sleep in young children.
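The logic of the twin comparison can be illustrated with Falconer’s classic formulas, which estimate genetic and environmental contributions from how much more alike identical twins are than non-identical twins. The GEMINI analyses themselves used more formal model-fitting, so this is a simplified sketch, and the correlations below are made up for illustration.

```python
# Falconer's classic twin-study decomposition (simplified illustration;
# the GEMINI analyses used formal structural-equation model fitting).
def ace_estimates(r_mz, r_dz):
    """Estimate variance components from MZ and DZ twin correlations."""
    a2 = 2 * (r_mz - r_dz)  # additive genetic influence (heritability)
    c2 = 2 * r_dz - r_mz    # shared environment
    e2 = 1 - r_mz           # unique environment (incl. measurement error)
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations for a trait dominated by shared environment:
print(ace_estimates(r_mz=0.90, r_dz=0.85))  # approx. A=0.10, C=0.80, E=0.10
```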

Perhaps surprisingly, the results showed that sleep duration and daytime nap duration were mainly influenced by the environment. Sleep disturbance was likewise mainly due to environmental influences, although the genetic effect was slightly larger than for sleep duration. This was true for both girls and boys. Although it could be argued that the carer’s schedule determines infants’ sleeping times, we would expect carers to adjust bed- and nap-times according to the infants’ needs. Unfortunately, there were no data available on when the infants actually went to sleep once they were put to bed, so we cannot say for sure how long they actually slept.

This study shows that, as so often, nature and nurture act together to influence how we behave; in this instance, how much and how well we sleep. The study is important because it shows that being a ‘morning vs. evening’ type of person is indeed influenced to an extent by genes, and that this is apparent very early in life. More importantly, however, it clearly shows that the home environment is a crucial factor in providing children with a good night’s sleep. So, it might be wise to practise good ‘sleep hygiene’ (and that is not only true for kids): remove the TV from the bedroom, have a consistent bedtime routine, put your kids to bed before 9pm if they are under 10 years old, let them fall asleep without anyone present, and limit (soft) drinks containing caffeine. That will, hopefully, help your kids, and ultimately you too, to get a well-deserved snooze.

 

Source

http://pediatrics.aappublications.org/content/early/2012/05/09/peds.2011-1571.abstract

Welcome

By Sam G Smith, on 1 September 2011

Hello and welcome to the Health Behaviour Research Centre blog! We are an academic research group made up of psychologists, epidemiologists, nutritionists, physiologists and health promotion experts. Our research is focused on behaviours related to health (such as diet, smoking, exercise, cancer screening and symptom reporting), particularly those related to cancer. We are funded mainly by Cancer Research UK, as well as other organisations such as the Medical Research Council, the National Institute for Health Research, the Biotechnology and Biological Sciences Research Council and the British Heart Foundation.

 

In this blog we aim to bring to you snippets of research from our own areas of expertise so that you can keep up to date with the latest in the world of health behaviour research. If you would like to contact us, our e-mail addresses are at the end of each blog post or you can look us up at the official HBRC website.

Enjoy!