Conversational Explanations: Explainable AI through human-machine conversation

Abstract
Explainable AI has received significant attention within both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes that produce these results are black boxes, unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, and provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this tutorial we explore the space of Explainable AI, with a particular focus on the role of human users within the human-machine hybrid team, and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily.
Authors
  • Dave Braines (IBM UK)
Date Apr-2019
Venue 2019 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)