CBC Digi-Hub Blog

User Trust in Artificial Intelligence – Conceptual Issues and the Way Forward

By Emma Norris, on 14 May 2019

By Eva Jermutus – University College London

Artificial Intelligence (AI) is one of the technologies transforming healthcare by altering the way in which we use healthcare data, treat patients and develop diagnostic tools. It is often perceived as part of the solution to healthcare challenges such as rising costs and staff shortages. Although AI appears promising in some areas, its potential and success depend not only on the system itself but also on users’ trust in it. Consider, for example, clinical decision support systems (CDSS) that alert clinicians to potential drug-drug interactions. Used appropriately, a CDSS can help reduce prescribing errors. However, overtrust or undertrust in the tool can result in suboptimal decision-making, potentially causing harm. Accordingly, the operator’s trust in AI is a crucial variable determining whether – and how – an AI system is used, ultimately influencing its value to individuals and society.
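To make the CDSS example more concrete, the sketch below shows, in Python, what such an alert might look like in its simplest rule-based form. The drug pairs, severity labels and function names are assumptions made up for illustration and do not describe any real CDSS product or API.

```python
# Minimal, purely illustrative sketch of a rule-based drug-drug interaction alert.
# All pairs, severities and names below are hypothetical examples.

# Known interacting pairs mapped to a severity label (illustrative data only).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "major",
    frozenset({"simvastatin", "clarithromycin"}): "major",
    frozenset({"metformin", "cimetidine"}): "moderate",
}

def interaction_alerts(current_meds, new_drug):
    """Return alerts raised when new_drug is added to the current medication list."""
    alerts = []
    for med in current_meds:
        severity = INTERACTIONS.get(frozenset({med.lower(), new_drug.lower()}))
        if severity:
            alerts.append({"pair": (med, new_drug), "severity": severity})
    return alerts

if __name__ == "__main__":
    # The system only informs; acting on the alert remains the clinician's decision.
    for alert in interaction_alerts(["Warfarin", "Metformin"], "Aspirin"):
        print(f"ALERT ({alert['severity']}): {alert['pair'][0]} + {alert['pair'][1]}")
```

The sketch makes the trust problem tangible: the tool only raises an alert; whether the clinician heeds it, overrides it or ignores it altogether depends on their trust in the system.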

This article briefly defines the concept of trust, highlights some key issues affecting our understanding of user trust and suggests ways forward for building trust in AI. It concludes that we need appropriate rather than greater user trust in AI, trust that reflects the current state of AI as well as the specific context of a trust situation.

 

What is Trust?

While there is no agreed definition of trust, a few key aspects have emerged. Firstly, trust becomes relevant when a degree of uncertainty and risk is involved. Secondly, it is influenced by characteristics of the trustor (user), trustee (AI tool) and environment. Thirdly, trust is context-specific: we may trust X to do Y, but not to do Z. Finally, the trustor must have some degree of decisional freedom to accept or reject the risk involved in trusting the other; without such freedom, trust becomes irrelevant because the trustor has to rely on the trustee in the absence of alternatives.

The decision to trust an AI-driven tool depends, in part, on the tool’s trustworthiness (i.e. its quality of being reliable and predictable). Previous research suggests that the trustworthiness of an AI-driven system is fostered by aspects such as competence, responsibility and dependability. Yet the perception of these characteristics may matter more than the objective characteristics themselves, highlighting the need to consider which factors shape users’ perception of a system’s trustworthiness.

A recent scoping review provides an initial overview of such personal, institutional and technological enablers and impediments of trust in digital health. While the aspects identified in the review are insightful, there are more fundamental issues that need to be considered if we are to understand user trust in AI.

 

Issues affecting our current understanding of trust and AI

On the one hand, trust research itself needs to be scrutinized. One of the key issues in this sphere is the lack of conceptual clarity. Terms such as ‘transparency’ are often mentioned without explaining what exactly they refer to (e.g. transparency of the algorithm or of the AI tool’s recommendation) and why they matter for our understanding of a system’s trustworthiness. Similarly, there is often a mismatch between the definitions and the methodology used in studies, and many studies do not even define what trust entails in their specific context. Failure to acknowledge and explain these differences between studies can obscure the specifics of trust in the field of AI, ultimately limiting our understanding of the phenomenon.

On the other hand, we need to consider the aspect of public trust in AI. AI has become omnipresent in every sphere of life, yet many people do not understand what AI actually is. This lack of understanding arises, in part, from the terminology used, which mystifies the concept of AI. At the same time, we lack an understanding of what AI is capable of. We are often presented with scenarios in which AI has gone wrong, but what is the current technological state of AI? What is it actually capable of doing, and what is merely a vision, an ‘overhyping’ of AI’s potential, or a media-created narrative?

 

The way forward: How to build appropriate trust

Given these issues, building appropriate trust in AI requires a multi-level approach. On the level of the AI system, future research will have to investigate determinants of – and measures for – a system’s trustworthiness, as well as ways in which the system can communicate its trustworthiness to the user. A prerequisite for this endeavour will be conceptual clarity about trust and interwoven concepts such as transparency. Simultaneously, users will have to be educated about AI: demystifying its underlying concepts and tackling ‘overhyping’ of AI’s potential and inaccurate media narratives will allow for a more factual representation of AI’s current capabilities. Training opportunities and increased engagement will further facilitate the creation of expert users. Finally, public trust in AI will require a legislative level that addresses accountability in the event of failure and discourages misuse of the available technology.

However, even if the aforementioned suggestions were implemented, we need to remind ourselves that technology is inevitably multi-use. AI is not inherently good or bad, but the way it is – or is not – put to use can be. There will be users with non-trust who may use AI tools in a manner where trustworthiness becomes irrelevant and only the fact that AI was employed matters, similar to tactical or political research utilization. Similarly, there will likely be users who adopt a trust strategy that results in too much or too little trust, leading to human-induced errors that offset AI’s benefits. The key point is that trust is dynamic and context-specific, and as such we need to learn how to trust adaptively. The aim, then, should not be greater trustworthiness of systems and greater trust in AI, but appropriate trustworthiness that encourages users to trust when trust is warranted and to distrust when it is not.

 

A related symposium on “The role of trust and integrity in AI and health behaviour change” was held at the recent 5th CBC Conference on Behaviour Change for Health. Read more about this symposium and the conference at #CBCCONF19.

 

Questions

  1. How can a system communicate its trustworthiness?
  2. How can we motivate users to calibrate their trust? How do we approach users who deliberately ignore information regarding the system’s trustworthiness?
  3. What strategies can we use to counter “overhyping” the current state of AI?

 

Bio

Eva Jermutus is a PhD student in the Social Science Research Unit at UCL. Her work focuses on trust in Artificial Intelligence in the healthcare environment. @EJermutus

 

 
