
UKRI Centre for Doctoral Training in Foundational AI


Is AI trustworthy? A discussion on AI and Faith at the UKRI CDT in Foundational AI at UCL Showcase in April 2022 at Cumberland Lodge

By sharon.betts, on 3 May 2022

Is AI untrustworthy? Why? How do we build trust in AI? Can AI actually be trustworthy, or only perceived to be so?

Since the beginning of 2021, a group of students in AI and in Science and Technology Studies has been running monthly discussion forums at the UCL Centre for Artificial Intelligence (AI Centre) on the big questions of how AI will influence society (see the schedule here). Naturally, most questions are rooted in the morals of the societies we live in.

Recently, we have been looking into how faith and religion influence trust in our relationship with AI, both as individuals and as a society. We took the chance at the recent Cumberland Lodge showcase of the UKRI CDT in Foundational AI to discuss this trust relationship with the roughly 100 AI researchers in attendance.

We held a world café-style session, asking the following six questions:

  1. Is AI untrustworthy? Why?
  2. Will AI be adopted like any other technology e.g. gene editing, steam trains?
  3. How do we build trust in AI?
  4. Can AI actually be trustworthy, or only perceived to be so?
  5. Do you think that faith has a place in a discussion on AI, ethics, and responsibility?
  6. “AI is helping us to understand God’s world. AI is bringing us closer to God.” Discuss.

The questions were placed in six different rooms; participants could move freely between rooms, as long as at least one person remained in each.

We found healthy engagement and debate. The questions prompted discussions such as: does AI need to be reproducible and open, or does the ability to run test cases negate the need for this? There were also questions about what it means to consider faith and AI at all; we found that a group of people who identified as atheists were particularly interested in the topic and in the different ways they could engage with it. Others discussed whether AI could ever be trustworthy, given that humans create and use AI, and we cannot make a blanket statement that humans are trustworthy. Others still opened a discussion about trusting different aspects of AI: for instance, you can trust that deepfakes work well, yet distrust them as a technology because they can impersonate high-profile people and be used to try to swing elections.

It was really interesting to hear highly technical discussions interwoven with social considerations, and to see how technical solutions were weighed against external (policy) solutions.

In the coming months, we will continue our discussion on AI and Faith. We hope to use the grant we were given by UCL Grand Challenges to reach beyond the AI Centre to people whose lives will be affected by AI but whose voices are underrepresented in the AI and ethics discussion.

We are also grateful to Tyler Reinmund for joining us on the day. He runs the RTI Student Network, an international and interdisciplinary student-led organisation based at the University of Oxford that connects students interested in responsible research and innovation. They, too, hold monthly reading groups to discuss literature from fields such as artificial intelligence and machine learning, philosophy, law, economics, and the health sciences, and they facilitate work-in-progress seminars for research students.

Learn more at https://www.rti.ox.ac.uk/student-network/ or email admin-studentnetwork@cs.ox.ac.uk

If you have more questions or suggestions or are interested in more events like these inside or outside the UCL AI Centre, feel free to get in touch with us.

Jaspreet Jagdev and Jakob Zeitler

Big thanks to Stephen Hughes and Elena Falco, whose support made all of this possible, and to the whole UCL AI Centre team (Sharon Betts, David Barber, Lopa Murgai) for facilitating these discussions. Thanks also to Adrian Weller at Cambridge for suggesting great directions to explore; we hope to collaborate in future sessions.