
CBC Digi-Hub Blog


Preparing for the ‘Artificial Intelligence Society’ – what researchers should know about AI

By Emma Norris, on 14 August 2019

By Candice Moore & Emily Hayes – University College London, UK

Artificial Intelligence (AI) is on the rise globally. The largest investments in AI development have been reported in China and the United States, with members of the European Union not far behind. In the UK, AI start-ups have flourished and a £1 billion package of support has been offered to industry and academia through Government and private-sector investment. The UK Government has also recently announced that £250m will be spent on AI integration within the NHS.

Academic institutions are paying attention to the AI boom. For example, MIT has announced it is ‘reshaping itself for the future’ by establishing the new MIT Stephen A. Schwarzman College of Computing. The College will address the challenges and opportunities afforded by the growing prevalence and sophistication of AI, such as the need for ethical and responsible technologies. Our own university, UCL, has recently established two new Centres for Doctoral Training related to AI, specialising in Foundational AI and AI-enabled Healthcare Systems.

Researchers working on AI applications with social and health implications must be equipped to make responsible judgements about the technology they are creating. As researchers on the Human Behaviour-Change Project, working to synthesise our understanding of behaviour change interventions using machine learning and AI, we wanted to explore these implications further. In May 2019 we attended a talk at the UCL Interaction Centre on the ethics of using AI for decision-making, given by Ronald Baecker, Professor Emeritus of Computer Science and author of Computers and Society. We have summarised the discussion for the Digi-Hub blog.

What society must require from AI:

Baecker argued that, when evaluating AI systems, we must think carefully about the consequences the output of the system might have for society, and that we should evaluate the system based on the qualities we would expect of a human carrying out the same task. Baecker drew a useful distinction between ‘consequential’ and ‘not-so-consequential’ AI, based on the types of decisions these systems make.

Not-so-consequential AI:

Not-so-consequential AI systems have been around for decades, and are used to carry out simpler tasks such as speech, image and pattern recognition. These kinds of AI systems do not require such in-depth ethical assessment. Virtual assistants, such as Apple’s Siri and Amazon’s Alexa, are examples of not-so-consequential AI. For the most part, the consequences of misinterpreted voice commands are low risk, if occasionally frustrating. Novel applications of not-so-consequential AI are increasing, and include the use of facial recognition to manage queues at the bar!


Caption: Demonstration of a bar-queue facial recognition system. Customers are assigned a number based on when they arrive in the queue.

 

Consequential AI:

More recently, AI systems have been created for a number of more complex tasks that have important societal consequences. Baecker gave a number of examples of such systems, one of which is discussed below.

Baecker argues that we should expect consequential AI systems to have the same qualities as a human carrying out the same tasks. According to Baecker, these systems should have ‘common sense, empathy, sensitivity to others, compassion, and a sense of fairness and justice’.

For example, the Allegheny Family Screening Tool (AFST) uses predictive analytics to assign children a ‘risk score’, which aims to quantify the likelihood of the child being placed away from their home, based on their referral records. This risk score is used to guide clinicians when making judgements about the family’s case, with scores beyond a certain threshold triggering a mandatory follow-up. Arguably, a human carrying out this task should act in a fair manner and without bias. We should therefore expect the same from the system, as mistakes could have grave consequences for children and their families.
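To make the thresholding logic concrete, here is a minimal, purely hypothetical sketch in Python. The features, weights and cut-off are invented for illustration and are not those of the actual AFST, whose model details are not reproduced here.

```python
# Illustrative sketch only: a made-up scoring rule with a mandatory-screening
# threshold, mimicking the *shape* of the decision logic described above.
# The real AFST features, weights and thresholds are NOT reproduced here.
from dataclasses import dataclass


@dataclass
class Referral:
    prior_referrals: int           # hypothetical feature
    prior_placements: int          # hypothetical feature
    parent_services_history: int   # hypothetical feature


def risk_score(referral: Referral) -> int:
    """Map a referral record to a bounded risk score (hypothetical weights)."""
    raw = (2 * referral.prior_referrals
           + 3 * referral.prior_placements
           + referral.parent_services_history)
    return max(1, min(20, raw))


MANDATORY_SCREEN_THRESHOLD = 17  # hypothetical cut-off


def triage(referral: Referral) -> str:
    """Turn the numeric score into the decision described in the blog post."""
    score = risk_score(referral)
    if score >= MANDATORY_SCREEN_THRESHOLD:
        return f"score {score}: mandatory follow-up"
    return f"score {score}: clinician judgement guided by score"


print(triage(Referral(prior_referrals=4, prior_placements=2, parent_services_history=3)))
```

The point of the sketch is that a single numeric threshold, however derived, converts a statistical estimate into a consequential decision, which is why the fairness and reliability of the underlying score matter so much.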

In his talk, Baecker gave a detailed set of criteria against which consequential AI should be judged:

 

1) Competence, dependability and reliability: AI systems should be expected to make reliable and dependable judgements. This is difficult to achieve with current artificial intelligence methods, as machine learning can be ‘greedy’, requiring a lot of training data to produce accurate results, and lacking innate knowledge or common sense.

2) Openness, transparency and explainability: Currently, for machine-learning methods, openness, transparency and explainability can be difficult to achieve, as information is represented in a distributed, less penetrable manner. Since 2016, DARPA has been leading a major research project, Explainable Artificial Intelligence (XAI), which aims to resolve this issue; a short sketch after this list illustrates one simple explainability technique.

3) Trustworthiness: For AI systems to be used effectively by humans, we need a good understanding of how much trust we can place in them. Our colleague, Eva Jermutus, recently wrote a blog post on this issue. Baecker argues that human-computer interaction and computer science research must become more collaborative to address this.

4) Responsibility and accountability: To use AI systems ethically, we need to think carefully about who is responsible if things go wrong. For example, who is accountable for complex systems, used and developed by multiple individuals? Should programmers pay for any wrongdoing, or is the onus on users to act responsibly?

5) Sensitivity, empathy, and compassion: In some cases, for example when fulfilling a caring role, AI systems are required to appear compassionate to the user. Many systems designed for these purposes are anthropomorphised, or zoomorphised, to create this impression. However, this should be handled carefully, as attempts to anthropomorphise robots often result in uncanny experiences.

6) Fairness, justice and ethical behaviour: Creating fair and just algorithms can be difficult, and AI systems often end up replicating human biases. For example, there have been debates over whether COMPAS, an offender risk-assessment tool, is biased towards assessing black defendants as high risk.
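One common way to probe the kind of bias alleged in the COMPAS debate is to compare error rates across groups. The toy Python sketch below uses invented labels and predictions (not COMPAS data) and checks whether the false positive rate, the share of people who did not reoffend but were still flagged as high risk, differs between two groups; a large gap is one widely used warning sign of unfairness.

```python
# Toy fairness check: compare false positive rates across two groups.
# All data here is randomly generated for illustration; it is not COMPAS output.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=500)            # protected attribute
reoffended = rng.integers(0, 2, size=500)           # true outcome (0/1)
predicted_high_risk = rng.integers(0, 2, size=500)  # model's prediction (0/1)


def false_positive_rate(mask: np.ndarray) -> float:
    """FPR within a group: flagged as high risk among those who did not reoffend."""
    negatives = (reoffended == 0) & mask
    if negatives.sum() == 0:
        return float("nan")
    return float(((predicted_high_risk == 1) & negatives).sum() / negatives.sum())


for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(group == g):.2f}")
```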

 
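Returning to the ‘explainability’ problem in point 2, the sketch below shows one simple post-hoc technique, permutation feature importance: scramble each input feature in turn and see how much the model’s accuracy drops. It uses scikit-learn and synthetic data purely for illustration; it is not DARPA’s XAI tooling or part of any deployed system.

```python
# Minimal sketch of one post-hoc explainability technique: permutation
# feature importance on a synthetic classification problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real prediction task.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```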

Implications for researchers

  • ‘AI systems are only as good as the data we put into them’, and we need to moderate decisions made by AI systems through human intervention. However, new technologies are being produced at a rapidly increasing rate, and there are insufficient resources to evaluate them. Researchers should continue to develop efficient, ‘best practice’ evaluation frameworks that mitigate the issue of ‘explainability’ in AI. Adequate resources will be required to apply these frameworks in practice.
  • Baecker argues that public knowledge about AI decision-making should be increased. This will help resolve unfounded distrust in AI, whilst empowering individuals to challenge the use of biased or inappropriate technology. Research findings must therefore be disseminated widely and clearly, without using jargon.
  • The evaluation of AI decision-making is just one piece of a complex, ethical puzzle. Other considerations include the use and storage of personal data, and the automation of labour. Currently, it is uncommon for computer science courses to include ‘ethical AI’ on their syllabus, and Baecker concluded by stating that we should be training students and young researchers to be able to grapple with these issues.

 

Bios:

Candice Moore is a Research Assistant at UCL working on the Human Behaviour Change Project which aims to use AI methods to advance behavioural science research. She completed an MSc in Cognitive and Decision Science which involved designing and coding an experiment on causal perception. Previously she has worked on a variety of research projects in developmental psychology, including a large-scale educational intervention.

 

 

Emily Hayes is a Research Assistant at UCL working on the Human Behaviour Change Project, which aims to use AI methods to advance behavioural science research. She completed an MSc in Health Psychology at UCL in 2017 and her research examined older adult health literacy, in relation to childhood and adulthood socioeconomic position. Prior to joining the HBCP, Emily worked for a digital health start-up that provides an app for medication management and adherence. She has also worked on a Knowledge Transfer Partnership, investigating workplace wellbeing through qualitative and quantitative research. She is interested in digital health and behaviour-change interventions to reduce health inequalities.
