The Social Issues Around Artificial Intelligence: A UCL Town Meeting
The Social Issues Around Artificial Intelligence was a UCL town meeting on AI, organised under the auspices of UCL’s Grand Challenge of Transformative Technology (GCTT). It was held on Wednesday 6 June 2018 from 5.30-7.00pm in the UCL Roberts 106 Lecture Theatre.

The meeting brought together UCL academics from philosophy, science, statistics, education and law to present short provocations on the impact of artificial intelligence on society.

Opening remarks were made by Professor Jon Agar (UCL Science & Technology Studies and Co-Chair of the UCL Grand Challenge of Transformative Technology Working Group), who highlighted expectations that the UK will play a prominent role as a world leader in AI and its underpinning technologies. He noted that public and academic discourse on AI is frequently framed in terms of concerns about its impact on privacy and social cohesion, but argued that the use and impact of AI in society can also be much brighter and more positive.

Professor Noreena Hertz described how her interest in AI grew out of co-developing a social-media-based system to predict the winner of The X Factor, an experience that deepened her concerns about the impact of AI on privacy.

The first panel provocation was delivered by Dr Jack Stilgoe (UCL Science & Technology Studies) on ‘Responsible research and innovation’. Stilgoe opened with a reflection on the terminology used for and in AI, and asked how we can make good decisions about emerging technologies. He highlighted that technology is a work in progress, and that many lessons are learnt only through the deployment of a new technology. Sometimes these lessons are learnt ‘in the wild’, with the general public participating in real-life experiments, as exemplified by the fatal collision in Arizona between an Uber vehicle in autonomous drive mode and a pedestrian wheeling a bicycle. The accident revealed and tested a number of engineering assumptions, and also showed how a member of the public had unknowingly participated in a test without their consent. Stilgoe further suggested that technological transitions involve political choices about who is expected to benefit from a new technology. He concluded that AI learns while doing and that machine learning thrives ‘in the wild’, yet both are largely private enterprises that need to be democratised.

Dr James Wilson (UCL Philosophy) gave the second provocation on the theme of ‘Personal data’. Wilson contextualised the concept of personal data using Samaritans Radar as a case study. This online app tracked the tweets of its users’ Twitter networks to detect posts suggesting that someone was struggling to cope or showing signs of mental distress. It raised questions about consent, and in particular whether using public tweets in the app required the consent of the tweeter. Wilson also suggested that privacy is highly contextual, and depends on harm and exposure.

The third provocation, on ‘Algorithmic fairness and AI’ in the context of the interaction between AI and large data sets, was made by Professor Sofia Olhede (UCL Statistics; Big Data Institute). Olhede opened by explaining why data are collected (to complete tasks or make better decisions), and how data can be used in either a predictive or an explanatory framework. She raised three issues for AI: transparency, fairness and bias. Bias is linked to the data sets used, including preferentially sampled data; fairness and bias are also linked to the training methods used, since training relies on human decisions whose biases are learnt and mimicked by AI. Because it is hard to identify and name a bias, transparency is needed, yet Olhede noted that ‘transparency’ is a new concept to introduce into AI and is difficult to put into practice for many reasons. She concluded by highlighting that predictive algorithms largely repeat past decisions and are assessed on average performance, which makes it hard to disentangle the reasons why an algorithm-based system has failed.

Dr Mutlu Cukurova (UCL Institute of Education) made the fourth provocation on ‘Innovation with educational technology’. Cukurova presented two themes: AI and automation in education, and AI in the design of education. Under the first theme, he suggested that all citizens should know the basics of AI (what it can and cannot do), and asked what skills young people need in order to flourish in an AI world. He also asked whether, in the knowledge-based versus skills-based education debate, a few coding sessions will really equip young people with the necessary knowledge or skills for an AI world. He suggested that attention should be weighted towards skills development, in particular human development and intelligence. On the use of AI in the design of education, Cukurova raised concerns about the use of big data to create and organise pedagogy, in particular the use of technologies that have embedded qualities. He also proposed using data to help learners and teachers understand their own behaviour in order to learn how to adapt to different scenarios.

The final provocation was delivered by Professor Jonathan Montgomery (UCL Laws) on ‘Law and the professions’. Montgomery posed an opening question on the meaning of ‘profession’: scholastic knowledge versus the ability to apply that knowledge, and the autonomous interpretation of it. He illustrated the question with a case study on the reading of medical diagnostic scans in healthcare: clinicians have traditionally held the role of interpreting scans to make a diagnosis, but computers can now deliver the diagnosis and leave the treatment plan to the clinician. Montgomery also highlighted that professional expertise has dwindled as materials have become easier to find and knowledge more accessible. He suggested that the role of professionals could be to perform social functions while maintaining human accountability. On the problem of fairness, he suggested that it depends on whether an individual feels that a process or decision is exploiting them or being used to their benefit.

After the five provocations, Hertz chaired an audience Q&A session. The panel received questions on flexibility in AI, whether some decisions should be made solely by AI, and accountability. An audience member suggested that “AI is the new plastic”, and that the public will only face the reality of its consequences decades later. Further questions were received on the hackability of AI, whether a moral framework system is needed, and who controls cloud systems and data.

Nenna Chuku
Grand Challenges Research Assistant


Photographs by James Paskins