Digital Education team blog

Ideas and reflections from UCL's Digital Education team
Ethics education in taught courses – not just a STEM issue?

By Samantha Ahern, on 18 December 2018

On 12 December I visited Central St Martins for the UAL Teaching Platform event Ethics in Arts, Design and Media Education. Much of the discourse at present is focused on ethics education in STEM disciplines such as Computer Science and Data Science, or, more accurately, on the lack of meaningful ethics education. Much of this has been driven by growing concerns around the algorithms deployed in social media applications and the seemingly rapid growth of AI-based applications. The House of Lords AI report explicitly talks about the need for ethics education in compulsory education if society, and not just the UK economy, is to benefit.

I was intrigued by the possibility of an alternative viewpoint.

The role of the arts is to push the boundaries, but are there limits to artistic expression?

Are rebellion and social responsibility mutually exclusive?

UAL seem to think not.

The focus of the day was ethics in the context of what students make and do, in postgraduate and undergraduate taught course contexts. UAL aim to weave ethics into the creative process, developing ethics as lived practice.

One approach to this has been the development of the Bigger Picture unit, which requires groups of students to undertake both collaborative practice and participatory design projects. Some of these projects required students to work with vulnerable members of society, e.g. the homeless. How do we ensure that participants benefit equally and are not exploited? Throughout the unit students were encouraged to work with these participants collaboratively, respectfully, honestly and with integrity. To enable this, explicit sections on ethical considerations were added to the unit handbook and project brief.

Additionally, UAL has been working on the development of an Educational Ethics Code and establishing an educational ethics committee.

The code has three main themes:

  • Respect for persons
    • Respecting the autonomy of others
  • Justice
    • Does everybody benefit?
    • Are there privilege and power differences?
    • What social good will the project do?
  • Beneficence
    • The art of doing good and no harm

There was a general acknowledgement amongst the attendees that many of the ethical decisions we make are situation-specific and time-bound, with key consideration to be given to who is part of the conversation and who holds the power. Privilege and power are important considerations, especially when it comes to consent models, regardless of discipline.

It was also acknowledged that there is a fine line between support (e.g. timely guidance) and imposition (e.g. lengthy formal ethical review processes).

Attending this event made me wonder: is this just one part of a much wider debate around compassion and social responsibility? To my mind it is.

Event-related readings:

AI and society – the Asilomar Principles and beyond

By Samantha Ahern, on 2 February 2017

This has been quite an enlightening week. I have added my name in support of the Asilomar AI Principles (https://futureoflife.org/ai-principles/) and attended a British Academy / Royal Society big debate, ‘Do we need robot law?’ (http://www.britac.ac.uk/events/do-we-need-robot-law).

This is against the backdrop of the European Parliament’s Legal Affairs Committee calling for EU rules for the fast-evolving field of robotics to be put forward by the EU Commission (Robots: Legal Affairs Committee calls for EU-wide rules), and reports of big data psychometric profiling being used in the US Presidential election and the EU referendum (https://motherboard.vice.com/en_us/article/how-our-likes-helped-trump-win).

This raises a number of questions around ethics, liability and criminal responsibility.

Machine learning, a subset of AI, is ubiquitous in the everyday tools we use, such as social media and online shopping recommendations, yet only 8% of the UK population are aware of the term, as noted at The Royal Society panel discussion Science Matters – Machine Learning and Artificial Intelligence. Recent high-profile advances in machine learning, for example AlphaGo, have utilised a technique known as deep learning, which predominantly uses deep neural networks, an evolution of the artificial neural networks first introduced in the 1950s (for more about the history of AI please see: https://skillsmatter.com/skillscasts/5704-lightning-talk). To all intents and purposes these are black-box algorithms: the systems teach themselves, and the exact nature of what is learned is unknown. This is a known issue where these systems are used for automated decision making. In addition to being ethically dubious when these decisions relate to living beings, such systems may also fall foul of the upcoming General Data Protection Regulation (Rights related to automated decision making and profiling).
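To make the black-box point concrete, here is a minimal sketch of my own (not from any of the sources above), using the scikit-learn library; the synthetic dataset and the network shape are arbitrary assumptions chosen purely for illustration:

```python
# A minimal sketch of why a trained neural network is a "black box":
# we can train it and use it for decisions, but the learned weights
# do not map onto human-readable decision rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A small synthetic binary-classification dataset stands in for real
# decision-making data (e.g. profiling or recommendation decisions).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A multi-layer perceptron: a (small) neural network in the same
# family as the deep networks discussed above.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0)
model.fit(X, y)

# The model produces confident automated decisions...
print(model.predict(X[:5]))

# ...but its "knowledge" is just arrays of weights, with no inherent
# explanation of why any individual decision was made.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
```

Nothing in the trained model itself provides a rationale for any individual decision; extracting one requires separate interpretability techniques, which is precisely what makes automated decision making with such systems contentious.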

Professor Noel Sharkey has stated that we are on the cusp of a robot revolution. Robots are shifting from the production line to our homes and service industries. There has been a lot of development of care robots, particularly in Japan, and this is an active area of research (e.g. Socially Assistive Robots in Elderly Care: A Systematic Review into Effects and Effectiveness). The introduction of robots into shared spaces has an impact on society.

A lot has already been said and written, by myself included (https://blog.swiftkey.com/slam-self-driving-cars-ai-and-mapping-on-the-move/), about the decision-making processes of autonomous vehicles and the ethics of the decisions made, but there is still a level of uncertainty, especially in law, as to who is responsible for these vehicles’ actions. What is less discussed is the potential impact of the roll-out of these vehicles on wider society; for example, 3 million jobs in the USA alone may be lost due to autonomous vehicles.

Unlike human beings, AI systems, including robots, do not have a sense of agency (What Is the Sense of Agency and Why Does it Matter?). This can cause difficulties within society, as the behaviours of robots or autonomous vehicles may differ from expected societal norms, causing a disruption to society and values. It also introduces ambiguity around liability and criminal responsibility. If an AI does not have an understanding of how its actions are having a negative impact on society, how can it be held accountable? Alternatively, are the developers or manufacturers then accountable for the AI’s behaviour? This is known as liability diffusion.

If an AI is capable of learning a sense of agency, what is an acceptable time-frame for this learning to take place, and what is an acceptable level of behaviour and/or error until sufficient learning has taken place?

The one thing that is clear is that these emerging technologies will be disruptive to all areas of society; they could be considered a Pandora’s box, but they also have the potential to bring huge benefits to the whole of society. As a result, a number of organisations are considering the potential impacts of these technologies, including the Machine Intelligence Research Institute (https://intelligence.org/) and the Centre for the Study of Existential Risk (http://cser.org/), plus organisations such as OpenAI (https://openai.com/about/) that are making these technologies available to all.

For further reading on where we are now and where we are heading, I recommend Murray Shanahan’s book ‘The Technological Singularity’ and Danny Wallace’s documentary ‘Where’s my robot?’.