Exploring Human-centred Explainable Artificial Intelligence (HExAI)

Workshop on Human-centred Explainable Artificial Intelligence

5th July 2019, 10am to 4pm, University College London
This event has now been held – see the HExAI leaflet for a report of the event.

The story of the rise of Artificial Intelligence has gained popular currency in recent years, and there is increasing recognition that decisions affecting all aspects of our lives are being taken using data filtered and interpreted by non-human agents. The idea that human judgement is being overridden by machine learning is proving to be an unsettling one. For example, some reporting on the recent crashes involving Boeing 737 Max aircraft has painted a picture of human pilots battling with the automated Manoeuvring Characteristics Augmentation System (MCAS). Meanwhile, in Europe, new and wide-ranging data protection regulations have recently been introduced which cover automated decision-making and profiling, and which some have argued introduce a ‘right to an explanation’ [1].

Explainable Artificial Intelligence, or XAI, is primarily concerned with enabling explanations, and in some cases justifications, of artificially intelligent systems in order to increase trust and confidence in those systems amongst their stakeholders. Yet XAI has still to be fully explored from the human point of view. This free workshop, funded by UCL Grand Challenges, seeks to extend the existing research agenda by giving greater attention to this human point of view and to develop the idea of human-centred explainable artificial intelligence (HExAI). The workshop organisers are: Professor Yvonne Rogers, UCL; Dr Jenny Bunn, UCL; Mark Bell and Dr Jo Pugh, The National Archives.

It has been noted that ‘leaving decisions about what constitutes a good explanation of complex decision-making models to the experts who understand these models the best is likely to result in failure in many cases’ [2]. For this reason, this workshop will deliberately establish an environment in which dialogue is privileged over dissemination, and process over product. Interdisciplinary perspectives, from philosophy to computer science, are welcomed, and participation is sought from both those who do and those who do not consider themselves to be experts in artificial intelligence.

 

  1. Information Commissioner’s Office. 2018. Automated decision-making and profiling. Retrieved from https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/
  2. Tim Miller, Piers Howe and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum. In Proceedings of the IJCAI-17 (International Joint Conference on Artificial Intelligence) Workshop on Explainable AI (XAI). Retrieved from http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf