CBC Digi-Hub Blog

E- and mHealth research at ISBNPA 2019

By Emma Norris, on 20 June 2019

By Laura M König, University of Konstanz, Germany

The 2019 Annual Meeting of the International Society for Behavioral Nutrition and Physical Activity (ISBNPA) took place from 4th to 7th June 2019 in Prague, Czech Republic. The conference was attended by delegates from all over the world who share an interest in advancing the study of behavioural nutrition, physical activity and sedentary behaviour. Among other topics, the programme featured a large number of sessions on digital health promotion research. Presentations covered both advances in digital tools for health behaviour assessment and digital intervention studies.

Digital tools for health behaviour assessment: Smartphones, wearables, and digital voice assistants

Several delegates discussed the importance of digital tools for assessing health behaviour in real life and showcased their latest research. Simone Verswijveren chaired a symposium on novel techniques to assess physical activity patterns, in which she and Prof Alan Donnelly highlighted the challenge of extracting meaningful patterns from activity tracker data. Specifically, Prof Donnelly pointed out that different thresholds for segmenting the continuous data stream may lead to different conclusions. Using a case study with adolescents, he demonstrated that acceleration-based thresholds might be better suited than step counts for accurately determining the amount of moderate to vigorous physical activity.
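To make the threshold issue concrete, here is a minimal R sketch with simulated data and hypothetical cut-points (neither taken from Prof Donnelly's study): the same minute-level tracker stream yields different estimates of time spent in moderate to vigorous physical activity depending on whether an acceleration-based or a step-based threshold is used to segment it.

```r
set.seed(1)
# Simulated day of minute-level tracker data (values and cut-points are illustrative only)
minutes <- data.frame(
  acceleration = rgamma(480, shape = 2, scale = 30),  # e.g. mean acceleration (mg) per minute
  steps        = rpois(480, lambda = 60)              # step count per minute
)
mvpa_by_acceleration <- sum(minutes$acceleration >= 100)  # minutes above an acceleration cut-point
mvpa_by_steps        <- sum(minutes$steps >= 100)         # minutes above a steps-per-minute cut-point
c(acceleration_based = mvpa_by_acceleration, step_based = mvpa_by_steps)
```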

Another symposium chaired by Dr Christina Pollard focused on opportunities that digital assessment of eating behaviour might provide for improving our understanding of eating behaviour and designing more effective digital eating behaviour interventions. Prof Carol Boushey showcased an image capturing tool that she and her team developed to collect dietary data in real-time, real-life situations to avoid biases in reporting and reduce user burden. In addition, she highlighted the richness of data this tool provides over and above data on food intake. For example, the pictures also provide information about the eating environment, which can be used to stimulate further discussions with participants. Prof Britta Renner and Prof Deborah Kerr presented further case studies showing that digital eating behaviour assessment may also provide information about important determinants of food choice such as eating enjoyment, as well as the exact timing of food intake, which might be harnessed in future digital interventions.

Another innovative way of assessing dietary intake was proposed by Prof Dori Steinberg, whose team conducted a feasibility study using Amazon Echo's voice assistant Alexa to record food intake. The voice assistant was connected to a food journal, allowing participants to enter ingredients and portion sizes by talking to the assistant. Participants perceived most recordings to be accurate and expressed confidence in using voice assistants for recording their diet. They especially valued that using a voice assistant saves time; however, they also expressed concerns about the limited ability to correct entries if the voice assistant misunderstood a food or portion size. Still, as more and more people own a digital voice assistant, the acceptability and feasibility of this tool should be explored further.

 

Advances in digital health promotion: Communication, collaboration, and scalability

Determining the success of a digital intervention is crucial for intervention developers in both academia and industry. However, do they always align in their criteria for success? This question was discussed by a panel chaired by Dr Camille Short. Before the discussion, the three panellists gave short introductions and case studies. First, Dr Marta Marques discussed challenges of academia-industry partnerships using the NoHoW study as an example. She highlighted that academic and industrial partners may have different expectations and prerequisites regarding a project – communication and willingness to compromise are key! In the following presentations, Prof Melanie Hingle and Dr Heather Patrick represented the viewpoints of academia and industry, respectively. On the one hand, Melanie Hingle underlined the importance of adopting rigorous methods for testing digital interventions. On the other hand, Heather Patrick pointed out that study participants are expected to use the intervention for longer than consumers will probably use the app in real life, and reminded the audience that one month is already a long time in the digital (industry) world. Thus, academics might need to adopt new and faster methods for testing interventions besides RCTs. See Emma Beard’s recent related Digi-Hub blog on methods considerations for digital health research here.

In a number of talks, results of feasibility studies and RCTs testing digital interventions were presented. For example, Prof Falk Müller-Riemenschneider presented results of a nationwide physical activity promotion programme in Singapore carried out by the local Health Promotion Board. In the programme, physical activity is promoted using award-based challenges in which the residents of Singapore can take part using a smartphone app and an activity tracker. An impressive data set collected from almost 400,000 participants showed that the programme increased daily steps by more than 1,000 between the pre- (August – October 2017) and post-intervention (April – June 2018) periods, underlining the potential and scalability of digital interventions.

Finally, in her invited Early Career Researcher talk, Dr Marta Marques highlighted the importance of reporting standards for advancing behavioural science. She introduced the audience to the ontologies that are currently being developed within the Human Behaviour-Change Project and related research projects. By developing ontologies of intervention components and creating intervention databases building on these ontologies, behavioural scientists will be able to identify research gaps more easily and to derive successful intervention components, which will inform effective large-scale behaviour change and prevention programmes.

The ISBNPA 2019 Annual Meeting was an exciting opportunity to hear about the latest research in health promotion and digital interventions. If you would like to learn more about the research presented at the conference, check out the conference hashtag #ISBNPA2019 on Twitter and take a look at the presentations made available for download on the Open Science Framework.

 

Some questions to reflect on:

  • Are there different activity thresholds for different study populations? If yes, how can we best determine them to increase accuracy of our data?
  • How can we best balance accuracy and feasibility in digital dietary assessment?
  • What concerns might potential study participants have regarding image- or voice-based recordings of their food intake, especially regarding data security and privacy? How to alleviate these concerns?
  • How can fruitful academia-industry partnerships be established and maintained?

 

Bio:

Dr Laura M König (@lauramkoenig) is a postdoctoral researcher at the University of Konstanz, Germany. Her research focuses on how to promote the uptake and prolonged use of mobile interventions for eating behaviour change. She is particularly interested in reducing participant burden by making interventions simpler and more fun.


Design and statistical considerations in the evaluation of digital behaviour change interventions

By Emma Norris, on 18 June 2019

By Dr Emma Beard – University College London

Devices and programs using digital technology to foster or support behaviour change have become increasingly popular. Evaluating their effectiveness is often more complex than for face-to-face interventions, where the ‘gold standard’ randomised controlled trial can be used. With digital interventions we often have repeated measures over long periods of time, which results in data with a complex internal structure: seasonal effects, underlying trends and clustering (autocorrelation). Drop-out (leading to loss of power) and confounding are also a problem, particularly with natural experiments. This affects how we interpret findings in terms of causal effects, and also how we interpret null results.

Several novel statistical techniques and study designs are available to help gain insight into the effects of specific digital intervention components on the causal mechanisms influencing outcomes, and to assess the association between outcomes and measures of usage. At the 2019 CBC Conference “Behaviour Change for Health: Digital and other Innovative Methods” we presented a symposium which aimed to cover some of the main statistical issues in analysing digital interventions, and also covered some of the designs most commonly used to evaluate digital therapeutic apps.

Time series analysis

To account for underlying trends, seasonality and autocorrelation we can look towards time series models, which are commonly used in financial forecasting and to assess the effect of population policies and interventions. Relevant analyses include the Autoregressive Integrated Moving Average (ARIMA) model, ARIMA with explanatory variables (ARIMAX) and Generalised Additive Mixed Models (GAMM), and they can be easily applied in most statistical packages, including R (e.g. the TSA, forecast and mgcv packages). GAMM is simply an extension of the Generalised Linear Mixed Model (GLMM) which has the added benefit of adjusting for seasonality using data-driven smoothing splines composed of a series of knots. ARIMA/ARIMAX can be viewed as regression models which have one or more autocorrelation terms (i.e. values closer in time tend to be more similar).
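As a rough illustration of how these models might be specified in R with the packages mentioned above (the series, data frame and variable names are hypothetical, not taken from any of the studies described here):

```r
library(forecast)  # auto.arima() for ARIMA/ARIMAX
library(mgcv)      # gamm(); attaches nlme, which provides corAR1()

# ARIMAX: an ARIMA model for a weekly outcome series y, with an intervention
# dummy x (0 before launch, 1 after) as the explanatory variable
fit_arimax <- auto.arima(y, xreg = x)
summary(fit_arimax)

# GAMM: seasonality modelled with a cyclic smoothing spline of week-of-year,
# plus an AR(1) term for residual autocorrelation within participants
fit_gamm <- gamm(outcome ~ intervention + s(week_of_year, bs = "cc"),
                 correlation = corAR1(form = ~ week | id),
                 data = dat)
summary(fit_gamm$gam)
```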

Dr Olga Perski presented a series of N-of-1 trials using GAMM which assessed within-person predictors of engagement with the Drink Less app. It was found that different app-related and psychological variables were significant predictors of the frequency and amount of engagement within and between individuals (e.g. the receipt of a daily reminder and perceived usefulness of the app were predictors of frequency of engagement). These results suggest that different strategies to promote engagement may be required for different individuals.

 

Missing data

The second issue covered was drop-out/missing data. There are three mechanisms of missing data, each of which has implications for the analysis. The first is data missing completely at random. This occurs when the propensity for a data point to be missing is completely random (e.g. a participant flips a coin and decides whether to answer a question or not). The second is data missing at random. This occurs when the propensity for a data point to be missing is not related to the missing data itself, but is related to some of the observed data (e.g. older people are less likely to answer questions about their income, but this does not depend on their income level). The third is missing not at random. This happens when the propensity for a data point to be missing is related to the missing data itself (e.g. participants with severe depression are more likely not to answer questions on depression).

Most commonly, people handle missing data using listwise deletion (analysis of complete cases only) or pairwise deletion (missing values are excluded separately for each pair of variables). Although pairwise deletion is preferred over listwise deletion (increased power), both assume data are missing completely at random. An alternative is multiple imputation. This is applicable when data are missing completely at random or missing at random (note: there is some bias in the missing at random case, but this is negligible). Multiple imputation follows several stages: 1) select a group of variables to predict the missing values, 2) substitute predicted values (‘imputes’) for the missing values, 3) repeat this to create multiple imputed data sets, 4) run the analysis on each data set, 5) combine the results using ‘Rubin’s Rules’. If data are missing not at random, the alternative approach is to model the missingness, but this leads to complex models, so generally data are assumed to be missing at random or completely at random.
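A minimal sketch of those five stages in R, assuming the mice package (the symposium did not name a specific package, and the data frame and variable names here are hypothetical):

```r
library(mice)

imp  <- mice(dat, m = 5, seed = 123)                     # stages 1-3: create 5 imputed data sets
fits <- with(imp, lm(outcome ~ group + baseline_score))  # stage 4: run the analysis on each data set
pool(fits)                                               # stage 5: combine results via Rubin's Rules
```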

 

Confounding

The third issue covered was confounding. The solutions discussed were 1) stratification, 2) multivariable analysis, and 3) propensity score matching. The objective of stratification is to fix the level of the confounders and produce groups within which the confounder does not vary. You then evaluate the exposure-outcome association within each stratum of the confounder and use the Mantel-Haenszel (M-H) estimator to provide a result adjusted across strata. If the crude and adjusted results differ, confounding is likely; if they do not, confounding is unlikely. Propensity score matching works by combining information on a number of variables (potential confounders) into a single score and then matching individuals on this score. The following caveats should be noted. First, it can be difficult to balance the treatment groups in small samples or if the comparison groups are very different. Second, unknown, unmeasured and residual confounding may still exist after matching, and matching variables should be unrelated to the exposure but related to the outcome. Finally, propensity score matching cannot handle a treatment defined as a continuous variable (e.g. drug dose) unless dosage is categorised. For an example of propensity score matching see https://www.ncbi.nlm.nih.gov/pubmed/22748518.
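A minimal sketch of propensity score matching in R, assuming the MatchIt package (not named in the talk); the treatment indicator and confounders below are hypothetical:

```r
library(MatchIt)

# Propensity score estimated from potential confounders, then nearest-neighbour matching
m <- matchit(treat ~ age + sex + baseline_use, data = dat, method = "nearest")
summary(m)                # compare covariate balance before and after matching
matched <- match.data(m)  # matched sample to carry forward into the outcome analysis
```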

 

Bayes Factors

The fourth issue covered was null findings. No scientific conclusion can follow automatically from p>0.05. A non-significant p-value could reflect either no evidence for an effect or data insensitivity (i.e. low power/high standard error). One solution to this problem is the use of Bayes Factors. B can range from 0 to infinity and conventional cut-offs are available: B<0.3 is evidence for the null hypothesis, B between 0.3 and 3 indicates data insensitivity, and B>3 is evidence for the alternative hypothesis. This means that if you have p>0.05 and B>0.3 you should avoid terms such as ‘no difference’ or ‘lack of association’. If p>0.05 and B<0.3 you can use these terms. If you do not calculate a Bayes Factor you should state ‘the findings are inconclusive as to whether or not a difference/association was present’. Bayes Factors can be easily calculated using online calculators and generally require the specification of a plausible predicted value. This should be pre-registered, e.g. on the Open Science Framework.
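For readers who prefer code to online calculators, here is a minimal R sketch of a Dienes-style Bayes factor in which the data are summarised as a normal likelihood and H1 is modelled as a half-normal scaled by the plausible predicted effect. This is one common convention, not necessarily the exact calculator used in the symposium, and the numbers in the example call are made up:

```r
bayes_factor <- function(obs, se, predicted) {
  # Likelihood of the observed effect under H0 (true effect = 0)
  lik_h0 <- dnorm(obs, mean = 0, sd = se)
  # Likelihood under H1, averaging over a half-normal prior with scale = predicted effect
  lik_h1 <- integrate(function(theta)
    dnorm(obs, mean = theta, sd = se) * 2 * dnorm(theta, mean = 0, sd = predicted),
    lower = 0, upper = Inf)$value
  lik_h1 / lik_h0
}

bayes_factor(obs = 0.4, se = 1.1, predicted = 1.0)  # B < 0.3 favours the null; B > 3 favours H1
```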

Dr Claire Garnett gave an example of using Bayes Factors to re-examine a dataset from the Drink Less app supplemented with extended recruitment. Bayes Factors calculated for the extended trial (total n=2586; 13.2% responded to follow-up) supported there being no large main effects on past week alcohol consumption (0.22<BF<0.83).

 

Novel trial designs

Randomised controlled trials are a poor fit for digital interventions because they (i) do not allow the app to continually improve from the data gathered during the trial and (ii) only tell us about the effectiveness of the whole app, not of individual components. Dr Henry Potts discussed a scoping review that explored how novel trial designs are implemented for digital therapeutic apps. These included: Sequential Multiple Assignment Randomised Trials (SMARTs) for dynamic treatment regimens; micro-randomised trials (MRTs) for ‘just-in-time’ push notifications; N-of-1 trials and series of N-of-1 trials for personalisation of apps; randomised response-adaptive trials for allocating more patients to the most effective app; and the Multiphase Optimisation Strategy (MOST) framework and multi-armed bandit models for building and optimising apps as complex interventions. He concluded that more micro-randomised trials and implementations of the MOST framework are emerging in the literature as trial designs for both the development and evaluation of apps. He considered how multi-arm trials, with options of interim analysis and response-adaptive randomisation, may have potential here as well.
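To give a feel for the micro-randomisation idea in particular, here is a minimal R sketch (purely illustrative, not drawn from the scoping review): each participant is re-randomised to receive or not receive a push notification at every decision point, so the effect of the notification can be estimated within person across time.

```r
set.seed(42)
n_participants    <- 50
n_decision_points <- 30   # e.g. one decision point per day over a month

mrt <- expand.grid(id = 1:n_participants, t = 1:n_decision_points)
mrt$notification <- rbinom(nrow(mrt), size = 1, prob = 0.5)  # fresh randomisation at every point
head(mrt)
```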

Thanks to all panellists at the symposium: Dr Emma Beard, Dr Olga Perski, Dr Claire Garnett & Dr Henry Potts, UCL

 

Questions

  • How can we improve the development and analysis of digital interventions?
  • How can we make the approach to digital interventions more scientifically rigorous?
  • How do we deal with big data?

 

Bio

Emma Beard (@DrEVBeard) is a Senior Research Associate in the Department of Behavioural Science and Health at UCL. Her research focuses on the application of novel statistical methodology to population surveys on tobacco and alcohol use.


Behaviour Change Techniques on the Go: Launch of a new version of the BCTTv1 App

By Emma Norris, on 4 June 2019

By Emma Norris, Dave Crane & Susan Michie, University College London

Changing behaviour is an immensely complex process, as we’ve seen in posts across the Digi-Hub blog! In order to describe and replicate behaviour change interventions, it is crucial to understand the component techniques that are delivered. However, different terms may be used to describe the same techniques. For example, “behavioural counseling” may involve “educating patients” or “feedback, self-monitoring and reinforcement”.

Behaviour Change Techniques (BCTs) are observable and replicable components of an intervention designed to alter or redirect causal processes that regulate behaviour. They can be thought of as “active ingredients” of ‘what’ is delivered (e.g. feedback, self-monitoring and reinforcement). BCTs can be used alone or in combination, via a variety of modes of delivery (e.g. mobile apps, paper leaflets, face-to-face).

The Behaviour Change Techniques Taxonomy v1 (BCTTv1) was developed as a hierarchically structured classification of 93 BCTs. Developed via extensive expert consultation exercises, BCTTv1 has been used globally to describe the content of new interventions, as well as synthesise the content of already completed interventions via systematic reviews and meta-analyses. To date, the taxonomy has been cited almost 2000 times since its publication in 2013.

BCTTv1 app

A mobile app version of the taxonomy is freely available to allow more flexible use of the BCTTv1 by intervention developers and coders. This allows users to explore the structure of the taxonomy, as well as view definitions and examples of all BCTs.

New versions of the app have recently been launched on iTunes and Google Play. To date, the app has been downloaded over 1,700 times from each app store.

BCT Training Tool

A free training tool has also been developed at http://www.bct-taxonomy.com/, providing a resource for intervention designers, researchers, practitioners, systematic reviewers and all those wishing to communicate the content of behaviour change interventions.

 

Feedback on BCTTv1 Portal

BCTTv1 was developed with the understanding that, in a few years, feedback from international users would lead to the development of BCTTv2. In order to inform this development, we encourage users of BCTTv1 to submit information about their experiences within this portal: https://www.ucl.ac.uk/behaviour-change-techniques/bcttv1-feedback

Please do try out and share these free resources.

For queries on the app or tool, please contact contact@bct-taxonomy.com

 

Questions:

  • How would you use the app as part of your work?
  • What BCTs do you most commonly implement in your own studies?

 

Bios:

Dr Emma Norris (@EJ_Norris) is a Research Fellow on the Human Behaviour-Change Project at UCL’s Centre for Behaviour Change. Her research interests include the synthesis of health behaviour change research and development and evaluation of physical activity interventions.


Dr David Crane (@dhc23) studied under Susan Michie at UCL and founded and runs the Smoke Free app. His research focuses on understanding what behaviour change techniques, in what combination, frequency, duration and form are going to be effective for a particular individual at a particular moment in time.


Professor Susan Michie (@SusanMichie) is Professor of Health Psychology and Director of the Centre for Behaviour Change at UCL. Her research focuses on developing the science of behaviour change interventions and applying behavioural science to interventions. She works with a wide range of disciplines, practitioners and policy-makers and holds grants from a large number of organisations including the Wellcome Trust, the National Institute for Health Research, the Economic and Social Research Council and Cancer Research UK.

 

User Trust in Artificial Intelligence – Conceptual Issues and the Way Forward

By Emma Norris, on 14 May 2019

By Eva Jermutus – University College London

Artificial Intelligence (AI) is one of the technologies transforming healthcare by altering the way in which we use healthcare data, treat patients and develop diagnostic tools. It is often perceived as part of the solution to healthcare challenges such as increasing costs and staff shortages. Although AI appears promising in some areas, its potential and success depend not only on the system itself but also on users’ trust in it. Consider, for example, clinical decision support systems (CDSS) that alert clinicians to potential drug-drug interactions. If used appropriately, a CDSS can help reduce prescribing errors. However, overtrust or undertrust in the tool can result in suboptimal decision-making, potentially causing harm. Accordingly, the operator’s trust in AI is a crucial variable determining whether – and how – an AI system is used, ultimately influencing its value to individuals and society.

This article briefly defines the concept of trust, highlights some key issues affecting our understanding of user trust and suggests ways forward for building trust in AI. It concludes that we need appropriate rather than greater user trust in AI, trust that reflects the current state of AI as well as the specific context of a trust situation.

 

What is Trust?

While there is no agreed definition of trust, a few key aspects have emerged. Firstly, trust becomes relevant when a degree of uncertainty and risk is involved. Secondly, it is influenced by characteristics of the trustor (user), trustee (AI tool) and environment. In addition, trust is context-specific: we may trust X to do Y, but not to do Z. Finally, the trustor must have some degree of decisional freedom to accept or reject the risk involved in trusting the other. A lack of such freedom would make trust irrelevant, as the trustor would have to rely on the trustee in the absence of alternatives.

The decision to trust an AI-driven tool depends, in part, on the tool’s trustworthiness (i.e. its attribute of being reliable and predictable). Previous research suggests that the trustworthiness of an AI-driven system is fostered by aspects such as competence, responsibility and dependability. Yet, the perception of these characteristics may be more important than the objective characteristics, highlighting the need to consider which factors contribute to users’ perception of a system’s trustworthiness.

A recent scoping review provides an initial overview of such personal, institutional and technological enablers and impediments of trust in digital health. While the aspects identified in the review are insightful, there are more fundamental issues that need to be considered if we are to understand user trust in AI.

 

Issues affecting our current understanding of trust and AI

On the one hand, trust research itself needs to be scrutinized. One of the key issues in this sphere is the lack of conceptual clarity. Terms such as ‘transparency’ are often mentioned without explaining what exactly they mean (e.g. is it transparency of the algorithm or of the AI tool’s recommendation?) and why they are important to our understanding of a system’s trustworthiness. Similarly, there is a mismatch between the definitions and the methodology used in studies, with many studies not even defining what trust entails in their specific context. Failure to acknowledge and explain the differences between studies can obscure the specifics of trust in the field of AI, ultimately limiting our understanding of the phenomenon.

On the other hand, we need to consider the aspect of public trust in AI. AI has become an omnipresent phenomenon in every sphere of life, yet many people fail to understand what AI actually is. This lack of understanding arises, in part, from the terminology used, which mystifies the concept of AI. At the same time, we lack an understanding of what AI is capable of. We are often presented with scenarios where AI has gone bad, but what actually is the current technological state of AI? What is it capable of doing, and what is merely a vision, an ‘overhyping of AI’s potential’ or a media-created narrative?

 

The way forward: How to build appropriate trust

Given these issues, building appropriate trust in AI requires a multi-level approach. At the level of the AI system, future research will have to investigate determinants of – and measures for – a system’s trustworthiness, as well as ways in which the system can communicate its trustworthiness to the user. A prerequisite for this endeavour will be conceptual clarity about trust and interwoven concepts such as transparency. Simultaneously, users will have to be educated about AI by demystifying underlying concepts and tackling occurrences of ‘overhyping’ AI’s potential and inaccurate media narratives, to allow for a more factual representation of AI’s current capabilities. Training opportunities and increased engagement will further facilitate the creation of expert users. Finally, public trust in AI will require action at the legislative level, addressing the concern of accountability in the event of failure as well as discouraging misuse of the available technology.

However, even if the aforementioned suggestions were implemented, we need to remind ourselves that technology is inevitably multi-use. AI is not inherently good or bad, but the way it is – or is not – put to use can be. There will be users with non-trust who may use AI tools in a manner where trustworthiness becomes irrelevant and only the fact that AI was employed matters, similar to tactical or political research utilization. Similarly, there will likely be users who adopt a trust strategy which results in too much or too little trust, leading to human-induced errors that offset AI’s benefits. The key point is that trust is dynamic and context-specific, and as such we need to learn how to trust adaptively. The aim then should not be greater trustworthiness of systems and trust in AI, but appropriate trustworthiness that encourages users to trust when trust is warranted and to distrust when it is not.

 

A related symposium on “The role of trust and integrity in AI and health behaviour change” was held at the recent 5th CBC Conference on Behaviour Change for Health. Read more about this symposium and the conference at #CBCCONF19.

 

Questions

  1. How can a system communicate its trustworthiness?
  2. How can we motivate users to calibrate their trust? How do we approach users who deliberately ignore information regarding the system’s trustworthiness?
  3. What strategies can we use to counter “overhyping” the current state of AI?

 

Bio

Eva Jermutus is a PhD student in the Social Science Research Unit at UCL. Her work focuses on trust in Artificial Intelligence in the healthcare environment. @EJermutus


Digital Hoarding Behaviours: How can we measure and evaluate them?

By Emma Norris, on 7 May 2019

By Kerry McKellar – Northumbria University, UK

Digital hoarding has been defined as “…the accumulation of digital files to the point of loss of perspective, which eventually results in stress and disorganisation”. While physical hoarding has been extensively investigated, there has been recent speculation about the existence of digital hoarding and the problems it may cause. Clearly, unlike physical hoarding, there is no impact on physical space; however, individuals may still be negatively affected by excessive digital clutter. There can also be negative impacts for businesses if their employees accumulate excessive amounts of data clutter, such as impacts on costs, data lifespan, productivity and knowledge management.

Designing a digital hoarding questionnaire:

The aim of our study was to develop and validate a new questionnaire that could identify digital hoarders and measure digital hoarding in the workplace. We wanted to gain an understanding of the scale of the problem and the potential consequences to both an organisation and the individual. We expected that digital hoarding would be predictive of workplace behaviours.

We conducted two studies in order to develop and validate our questionnaire.

 

Study 1:

424 UK participants in full- or part-time employment completed the initial questionnaire online via Qualtrics. We developed the initial 12-statement questionnaire by adapting questions from the physical hoarding literature, focusing on the core facets of accumulation, clutter, difficulty discarding and distress (Frost & Gross, 1993; Steketee & Frost, 2003). We conducted a principal component analysis, which resulted in two scales: ‘difficulty deleting’ (6 items) and ‘accumulating’ (4 items). The first factor, difficulty deleting, evokes feelings of loss or distress when data are deleted and relates to the more emotional aspects of hoarding. The second factor, accumulating, suggests that the mass collection of digital files is simply perceived as the more practical and low-effort solution to the management of data.
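For illustration, the principal component step could look roughly like this in R using the psych package (an assumption; the exact software and settings used in the study are not reported here, and `items` stands for a hypothetical data frame of the 12 questionnaire responses):

```r
library(psych)

pca <- principal(items, nfactors = 2, rotate = "varimax")  # extract two rotated components
print(pca$loadings, cutoff = 0.4)  # inspect which statements load on 'difficulty deleting' vs 'accumulating'
```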

The second questionnaire was created as a way to assess the extent of digital hoarding in the workplace, asking about digital files stored, deletion behaviours, and beliefs about the consequences of digital hoarding for the self and the organisation. These questions were derived from previous literature; of particular importance were the qualitative findings of Sweeten et al. (2018), whose ‘five barriers to deletion’ are included in section 3 of our questionnaire.

This pilot study therefore resulted in two more robust questionnaires better suited to the assessment of digital hoarding attitudes and behaviours. The final two questionnaires were the Digital Hoarding Questionnaire (DHQ), designed as a psychometric assessment of digital hoarding traits and attitudes, and the Digital Behaviours at Work Questionnaire (DBWQ), which included individual and workplace demographics and four sections on workplace hoarding behaviours and attitudes measuring (1) accumulation and storage behaviours; (2) deletion behaviours; (3) rationale for keeping emails; and (4) perceived consequences for self and company.

Study 2:

203 UK participants in full- or part-time employment who used a computer as part of their job completed the final revised questionnaires. A random sample of 50 individuals was also asked to retake the study 6 weeks after first taking part so that we could establish the test-retest reliability of the scale.

We found significant test-retest correlations, showing good consistency over 6 weeks. We also examined the differences between individuals who had data protection responsibilities (DPR) and those who did not. We found that those with DPR had higher numbers of read emails, unread emails, presentation files, photographs and total files. Those with DPR also scored higher on the two digital hoarding factors, difficulty deleting and accumulating. Interestingly, those with DPR and those who scored higher on the digital hoarding factors also perceived greater consequences for themselves and for their company if their files were accidentally released.

Lastly, we examined the top five reasons why people don’t delete emails:

  1. They may come in useful in the future.
  2. They may contain information vital for their job.
  3. They may need ‘evidence’ that something has been done.
  4. They are worried they may accidentally delete something important.
  5. They feel a sense of professional responsibility towards them.

 

Conclusions:

The DHQ and the DBWQ were found to provide an accurate assessment of digital hoarding behaviours, showed good evidence of reliability and clearly distinguished between those with and without data protection responsibilities. The DBWQ could enable organisations to gain a quantitative understanding of the amount and type of files that employees are routinely keeping, and to explore subgroups in the organisation. For example, we found that employees with DPR retain significantly more information than employees without DPR. This is surprising: while they might be expected to handle more data, there is no reason for them to retain it, and given their specialist knowledge of data protection, we might expect them to delete more.

There is a strong sense that people perceive these hoarding behaviours as harmless, fuelled in part by the fact that digital storage is cheap and search engines are fast, meaning individuals perceive benefits to storing ever-increasing amounts of digital data. However, employee hoarding behaviour is likely to become troublesome with the roll-out of new privacy and data protection legislation that regulates the storage of personal data (e.g. GDPR in Europe). This could mean that both organisations and individuals are unwittingly storing data illegally. However, we simply do not yet understand the scale of the issue, including the kinds of material individuals hoard and the associated workplace risks. Further work is needed to fully understand workplace digital hoarding.

You can read the full paper in Computers in Human Behavior here.

 

Questions:

  • Do you think it is necessary to keep a high amount of digital files?
  • Are there more benefits or disadvantages of keeping an increasing amount of digital files?
  • Can we design out digital hoarding?

 

Bios:

Dr Kerry McKellar is a Postdoctoral Researcher in the PaCT Lab at Northumbria University, working on projects associated with cybersecurity, health and online behaviours. @KerryMcKell


Professor Nick Neave is Director of the Hoarding Research Group at Northumbria University.


Dr Liz Sillence is a Senior Lecturer in the Department of Psychology at Northumbria University. She is a member of the Psychology and Communication Technologies Lab and an eHealth researcher. @beehivewife


Professor Pam Briggs is Chair in Applied Psychology at Northumbria University. She is a founder member of the UK’s Research Institute in the Science of Cybersecurity (RISCS). @pamtiddlypom