Mitigating sex and gender biases in artificial intelligence for biomedicine, healthcare and behaviour change
By CBC Digi-Hub Blog, on 20 July 2020
Written by Dr Silvina Catuara Solarz on behalf of the Women’s Brain Project
Over the past two decades, digital health tools for the prevention and management of chronic disease have emerged from both academia and industry. A particularly prolific area is tools promoting mental as well as physical health, with a focus on behaviour change and habit formation.
Artificial Intelligence (AI) systems play a central role in the advancement of these digital health tools: they aim to identify patterns of behaviour and provide personalised recommendations according to the user’s profile, with a view to optimising health outcomes. AI is also accelerating progress on a myriad of complex tasks in the biomedical field aligned with the precision medicine approach, such as image recognition for diagnosis, identification of gene profiles associated with vulnerability to disease, and prediction of disease prognosis from electronic health records.
AI and digital health tools are promising means of providing scalable, effective and accessible health solutions. However, a critical gap on the path to successful digital health tools is the lack of robust and rigorous analysis of sex and gender differences in health. Sex and gender differences have been reported in chronic diseases such as diabetes, cardiovascular disorders, neurological diseases, mental health disorders and cancer, and many health areas remain unexplored.
Neglecting sex and gender differences in both the generation of health data and the development of AI for use within digital health tools will lead not only to suboptimal health practices but also to discrimination. In this regard, AI can act as a double-edged sword. On the one hand, if developed without removing existing biases and accounting for potential confounding factors, it risks magnifying and perpetuating existing sex and gender inequalities. On the other hand, if designed properly, AI has the potential to mitigate inequalities by accounting for sex and gender differences in disease and using this information for more accurate diagnosis and treatment.
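To make the first edge of that sword concrete, here is a minimal sketch of one common bias-auditing step: evaluating a diagnostic model’s performance separately for each sex, so that an aggregate score cannot mask under-performance on one group. The dataset, file name and column names below are hypothetical placeholders, not taken from our paper.

```python
# Minimal sketch: disaggregated evaluation of a diagnostic classifier.
# "health_records.csv", "sex" and "diagnosis" are hypothetical; features
# are assumed already numeric (e.g. sex encoded as 0/1).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("health_records.csv")
X = df.drop(columns=["diagnosis"])      # features, including "sex"
y = df["diagnosis"]                     # hypothetical binary label

# Stratify on sex so both groups are represented in the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=df["sex"], random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Report metrics per sex: an overall score can hide a model that
# systematically under-diagnoses one group.
for sex, group in X_test.groupby("sex"):
    y_true = y_test.loc[group.index]
    print(f"sex={sex}: "
          f"recall={recall_score(y_true, model.predict(group)):.2f}, "
          f"AUC={roc_auc_score(y_true, model.predict_proba(group)[:, 1]):.2f}")
```

If recall or AUC differs markedly between the groups, the training data or model design deserves scrutiny before deployment.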
Our work, recently published in npj Digital Medicine, examines the existing sex and gender biases in the generation of biomedical, clinical and digital health data, as well as the AI-based technology areas most exposed to the risk of incorporating such biases, namely big data analytics, digital biomarkers, natural language processing (NLP) and robotics.
In the context of mental health and behaviour change, some efforts have been made to include a sex and gender dimension in the implementation of theoretical frameworks for social and behaviour change communication. Still, further collection of data on the influence of sex and gender on aspects such as user experience, engagement and efficacy of digital health tools will provide a valuable starting point for identifying optimal paths towards efficient and tailored interventions.
Active and passive data input from users can be explored to derive sex- and gender-associated insights through NLP and digital phenotyping. While these insights will shed light on how to optimise digital health tools for individual users, attention must be paid to potential biases that may arise. For example, NLP models trained on textual data (an approach frequently used by mental health chatbots) are known to incorporate existing sex and gender biases, such as the gendered semantic context of non-definitional words like ‘babysitter’ or ‘nurse’; the sketch below illustrates one way to measure this.
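As a hedged illustration, the following sketch measures the gendered context of a few occupation words in pretrained GloVe embeddings via gensim; the model name and word list are illustrative choices, not from our paper.

```python
# Minimal sketch: quantifying the gendered semantic context of
# non-definitional occupation words in pretrained word embeddings.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # pretrained GloVe vectors

for word in ["babysitter", "nurse", "engineer", "doctor"]:
    # Positive gap: the word sits closer to "she" than to "he".
    gap = vectors.similarity(word, "she") - vectors.similarity(word, "he")
    print(f"{word:>10}: she-vs-he similarity gap = {gap:+.3f}")
```

A chatbot built on such embeddings inherits these associations unless they are explicitly measured and corrected, for instance by debiasing the embedding space or curating the training corpus.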
To avoid undesired biases, we strongly recommend pursuing ‘explainability’ in AI. This refers to uncovering why and how an AI system arrives at a certain outcome, prediction or recommendation, thereby increasing the transparency of machine decisions that would otherwise be unintelligible to humans.
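One simple, model-agnostic way to pursue this is permutation importance: shuffle one feature at a time and measure how much the model’s score degrades. The sketch below, reusing the hypothetical model and test split from the earlier example, asks how heavily predictions depend on the ‘sex’ feature.

```python
# Minimal sketch: probing feature dependence with permutation importance.
# "model", "X_test" and "y_test" come from the earlier hypothetical example.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean, std in zip(X_test.columns,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>12}: importance = {mean:.3f} ± {std:.3f}")
```

A strong dependency on sex is not inherently wrong; the value of such checks is in making the dependency visible, so developers can judge whether it reflects genuine physiological differences or a bias in the training data.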
Finally, we advocate raising awareness of sex and gender differences and biases by incorporating policy regulation and ethical considerations at each stage of data generation and AI development, to ensure that these systems maximise the health and wellbeing of the population.
This article was written on behalf of the Women’s Brain Project (WBP) www.womensbrainproject.com, an international non-profit organisation based in Switzerland. Composed largely of scientists, WBP aims to raise awareness, stimulate a global political discussion and conduct research on sex and gender differences in brain and mental health, from basic science to novel technologies, as a gateway to precision medicine.
Questions:
- Is sex and gender accounted for in available behaviour change apps?
- Is sex and gender considered in the frameworks used to evaluate the effectiveness of behaviour change apps?
- What are the risks of excluding sex and gender data when developing and evaluating behaviour change apps? What are the potential privacy challenges associated with their inclusion?
Biography
Silvina Catuara Solarz holds a PhD in Biomedicine, specialising in Translational Neuroscience, from the Universitat Pompeu Fabra (Barcelona, Spain) and currently works as a Strategy Manager at Telefonica Innovation Alpha Health, a company focused on digital mental health solutions. As a member of the Women’s Brain Project executive committee, she conducts research on innovative technologies and their role in understanding sex and gender differences in health and disease. Her main interests include the application of digital technologies and AI in products that prevent and manage health conditions in a personalised and scalable way.
Find Silvina here:
https://www.linkedin.com/in/silvina-catuara-solarz/