Digital Education team blog


Ideas and reflections from UCL's Digital Education team


AI and society – the Asilomar Principles and beyond

By Samantha Ahern, on 2 February 2017

This has been quite an enlightening week. I have added my name in support of the Asilomar AI Principles (https://futureoflife.org/ai-principles/) and attended a British Academy / The Royal Society big debate ‘Do we need robot law?’ (http://www.britac.ac.uk/events/do-we-need-robot-law).

This is against the backdrop of the European Parliament’s Legal Affairs Committee calling for EU rules on the fast-evolving field of robotics, to be put forward by the EU Commission (Robots: Legal Affairs Committee calls for EU-wide rules), and reports of big-data psychometric profiling being used in the US Presidential election and the EU referendum (https://motherboard.vice.com/en_us/article/how-our-likes-helped-trump-win).

This raises a number of questions around ethics, liability and criminal responsibility.

Machine learning, a sub-set of AI, is ubiquitous in everyday tools such as social media and online shopping recommendations, yet only 8% of the UK population are aware of the term, as noted at The Royal Society panel discussion Science Matters – Machine Learning and Artificial Intelligence. Recent high-profile advances in machine learning, for example AlphaGo, have utilised a technique known as deep learning, which predominantly uses deep neural networks, an evolution of the artificial neural networks first introduced in the 1950s (for more about the history of AI please see: https://skillsmatter.com/skillscasts/5704-lightning-talk). To all intents and purposes these are black-box algorithms: the systems teach themselves, and the exact nature of what is learned is unknown. This is a known issue where such systems are used for automated decision making. In addition to being ethically dubious when these decisions relate to living beings, they may also fall foul of the upcoming General Data Protection Regulation (Rights related to automated decision making and profiling).
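To illustrate the black-box point, here is a minimal sketch (illustrative only) of a tiny feed-forward neural network making a yes/no "decision". All the weights below are arbitrary stand-ins for values a real network would learn from data; the point is that even with full access to them, it is hard to say *why* the network answered as it did.

```python
import math

def sigmoid(x):
    """Squash a value into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

# Hypothetical "learned" weights: two inputs -> two hidden units -> one output.
# In a real deep network there would be millions of such numbers.
W_hidden = [[0.9, -1.2], [-0.4, 2.1]]
b_hidden = [0.1, -0.3]
W_out = [1.5, -2.0]
b_out = 0.2

def decide(x1, x2):
    """Return an automated yes/no decision for two input features."""
    hidden = [sigmoid(x1 * w[0] + x2 * w[1] + b)
              for w, b in zip(W_hidden, b_hidden)]
    score = sigmoid(sum(h * w for h, w in zip(hidden, W_out)) + b_out)
    return score > 0.5  # the "decision"

print(decide(1.0, 0.0))  # a yes/no answer, with no human-readable rationale
```

Nothing in the weight matrices explains the decision in human terms, which is exactly the difficulty when such systems are used to make decisions about people.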

Professor Noel Sharkey has stated that we are on the cusp of a robot revolution. Robots are shifting from the production line to our homes and service industries. There has been a lot of development of care robots, particularly in Japan, and this is an active area of research (e.g. Socially Assistive Robots in Elderly Care: A Systematic Review into Effects and Effectiveness). The introduction of robots into shared spaces has an impact on society.

A lot has already been said and written, including by myself (https://blog.swiftkey.com/slam-self-driving-cars-ai-and-mapping-on-the-move/), about the decision-making processes of autonomous vehicles and the ethics of the decisions made, but there is still a level of uncertainty, especially in law, as to who is responsible for these vehicles’ actions. What is less discussed is the potential impact of the roll-out of these vehicles on wider society; for example, 3 million jobs in the USA alone may be lost due to autonomous vehicles.

Unlike human beings, AI systems, including robots, do not have a sense of agency (What Is the Sense of Agency and Why Does it Matter?). This can cause difficulties within society, as the behaviour of robots or autonomous vehicles may differ from expected societal norms, disrupting society and its values. It also introduces ambiguity around liability and criminal responsibility. If an AI does not understand how its actions are having a negative impact on society, how can it be held accountable? Alternatively, are the developers or manufacturers then accountable for an AI’s behaviour? This is known as liability diffusion.

If an AI is capable of learning a sense of agency, what is an acceptable time-frame for this learning to take place in and what will be an acceptable level of behaviour and/or error until sufficient learning has taken place?

One thing is clear: these emerging technologies will be disruptive to all areas of society. They could be considered a Pandora’s box, but they also have the potential to bring huge benefits to the whole of society. As a result, a number of organisations are considering the potential impacts of these technologies, including the Machine Intelligence Research Institute (https://intelligence.org/) and the Centre for the Study of Existential Risk (http://cser.org/), plus organisations such as OpenAI (https://openai.com/about/) that are making these technologies available to all.

For further reading on where we are now and where we are heading, I recommend Murray Shanahan’s book ‘The Technological Singularity’ and Danny Wallace’s documentary ‘Where’s my robot?’.
