
An Archivist’s Perspective on Explainable Artificial Intelligence

By Jenny J Bunn, on 1 October 2018

One definition of explainable artificial intelligence (XAI) that I quite like is from a previous workshop on the topic (held in 2017), which defines it as ‘the challenge of shedding light on opaque machine learning (ML) models in contexts for which transparency is important’.[1] I like this definition because it allows me a way into the topic. I cannot claim to know much about machine learning, but I do know about operating in contexts for which transparency is important. As an archivist and records manager my focus is, and long has been, on supporting and maintaining transparency (and its corollaries, accountability and trust). Opacity is not a quality unique to machine learning models; our own individual reasoning and actions, as well as those of large bureaucratic organisms such as governments and businesses, can be equally opaque. It is the opacity of this last kind of organisation with which I have been particularly involved, and my practice has developed ways of maintaining the potential for opening up such opaque systems to examination. These ways rely on an idea of authentic, reliable and useable records as, in part at least, discrete things that can be stored and accessed over time. My concern, and that of others who share my practice, is that, whereas we used to know what such records needed to look like and how we could conceive of and manage them, we now realise that changes in technology will not allow us to continue to conceive of them in the same way. As The National Archives recently put it: ‘The uncertain and unbounded nature of new forms of records, such as those derived from machine learning systems, is causing us to rethink how we preserve evidence of these systems, and what is the “public record” that we are preserving’.[2]

The main questions I have with regard to explainable artificial intelligence are framed by this background, as I am asking things like:

  • Where and what is the record in all this?
  • How do I need to adapt my practice to ensure that I can continue to support transparency, accountability and trust?
  • Can I still do the above by thinking in terms of records, or do I need to think in different terms?

Looking (briefly) at the existing research literature on the topic of XAI to try to work out what those different terms might be, it is heartening to see a common metaphor being employed – that of the box. (Archivists and records managers know all about boxes!) Different approaches to XAI are often characterised as black box, white box or even gray box. For example, Biran and Cotton distinguish between prediction interpretation and justification approaches, which seek to interpret or justify ‘otherwise black-box models’, and approaches that aim to produce ‘models that are inherently interpretable’.[3] Then again, Brinton discusses an approach called ‘Gray Box Decision Characterization’ that ‘lies in between a black-box and white-box approach’.[4] The point at issue within this categorisation appears to be the extent to which the inner workings inside the box are open for examination, whether during and/or after the fact. What the fact is, though, seems to vary. Sometimes the fact seems to be a specific decision or recommendation, and the question becomes whether the dots from input to output can be followed, or whether the gap between them must instead be guessed at, retrospectively reconstructed, justified or explained. Sometimes, however, the fact seems to be ‘the model’, and the question becomes whether or not that computation, or learning, or cognition, can be trusted as if it were an agency, an intelligence or consciousness similar to our own (which is, let’s face it, often opaque even to ourselves).
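To make that distinction concrete, the following is a minimal, illustrative sketch, assuming Python and the scikit-learn library (neither of which the works cited prescribe): a ‘white box’ decision tree whose learned rules can simply be printed and read, alongside a ‘black box’ ensemble whose behaviour must be retrospectively reconstructed by fitting a simple surrogate model to its predictions – the kind of after-the-fact justification described above. All model and dataset choices here are assumptions made purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small, standard dataset purely for illustration.
data = load_iris()
X, y = data.data, data.target

# 'White box': an inherently interpretable model whose inner workings
# can simply be printed and read as if-then rules.
white_box = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(white_box, feature_names=list(data.feature_names)))

# 'Black box': hundreds of trees voting; there is no single readable
# rule set to open up and examine.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: fit a simple surrogate to the black box's own
# predictions and inspect that instead. Note this is a retrospective
# reconstruction of behaviour, not the model's actual reasoning.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

From a recordkeeping point of view, it is worth noticing that the surrogate’s printed rules are a new document produced about the model after the fact, rather than a record of the decision process itself.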

That I see this particular distinction also betrays my background. Over time, thinking about recordkeeping has expanded beyond a narrow focus on transparency in terms of examining past decisions or actions in order to establish the facts of who did or decided what, and on what basis. It has broadened to consider the way in which examining past decisions and actions can also establish the fact of who (and that) we think we are: our sense, both individually and collectively, of identity. Attempting to unify thinking across these two levels is something that archivists and records managers are used to doing, which is another reason why they may prove useful in putting XAI into practice. Certainly it is they who will be on the front line of satisfying the European Union’s new and wide-ranging data protection regulations, which cover automated decision-making and profiling and which, some have argued, introduce a ‘right to an explanation’.[5]

A recent volume on Trust, Computing and Society noted a tendency for studies of trust to fail to bring in a time perspective and to consider ‘the ways in which human behavior – rational or otherwise – is embedded in practices extended in time’.[6] If there is to be a more human-centred agenda for explainable artificial intelligence, it must take this question of extension through time into account, and that means it should concern itself not just with explanations, interpretations or justifications, but also with records.

[1] Proceedings of IJCAI-17 (International Joint Conference on Artificial Intelligence) Workshop on Explainable AI (XAI). Retrieved from http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf

[2] The National Archives. 2018. Rethinking the record. Retrieved from http://www.nationalarchives.gov.uk/about/our-research-and-academic-collaboration/our-research-and-people/our-research-priorities/rethinking-the-record/

[3] Or Biran and Courtenay Cotton. 2017. Explanation and Justification in Machine Learning: A Survey. In Proceedings of IJCAI-17 (International Joint Conference on Artificial Intelligence) Workshop on Explainable AI (XAI). Retrieved from http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf

[4] Chris Brinton. 2017. A Framework for Explanation of Machine Learning Decisions. In Proceedings of IJCAI-17 (International Joint Conference on Artificial Intelligence) Workshop on Explainable AI (XAI). Retrieved from http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf

[5] Information Commissioner’s Office. 2018. Automated decision-making and profiling. Retrieved from https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/

[6] Olli Lagerspetz. 2014. The Worry about Trust. In Richard Harper (Ed.), Trust, Computing and Society. Cambridge University Press, New York, 120-143.
