Achieving Useful AI Explanations in a High-Tempo Complex Environment

Abstract Based on current capabilities, many machine learning techniques are inscrutable and can be hard for users to trust because they lack effective means of generating explanations for their outputs. There is considerable research and development in this area, with a wide variety of proposed explanation techniques for AI/ML across a range of data modalities. In this paper we investigate which modality of explanation to choose for a particular user and task, taking into account relevant contextual information such as the time available to them, their level of skill, their level of access to the data and sensors in question, and the device that they are using. Additional environmental factors, such as the available bandwidth and the currently usable sensors and services, can also be taken into account. The explanation techniques we are investigating range across transparent and post-hoc mechanisms, and form part of a conversation with the user in which the explanation (and therefore human understanding of the AI decision) can be ascertained through dialogue with the system. Our research explores generic techniques that can underpin useful explanations in a range of modalities, in the context of AI/ML services that operate on multi-sensor data in a distributed, dynamic, contested and adversarial setting. We define a meta-model for representing this information and, through a series of examples, show how this approach can be used to support conversational explanation across a range of situations, datasets and modalities.
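To make the idea of a context-driven meta-model concrete, the sketch below shows one possible Python representation of user and environment context being used to select an explanation modality. It is illustrative only: the class, field and function names (UserContext, EnvironmentContext, choose_modality, the listed modalities) and the selection policy are assumptions for this sketch, not the meta-model defined in the paper.

```python
# Illustrative sketch only: hypothetical names and a toy policy, not the paper's meta-model.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List


class Modality(Enum):
    """Candidate explanation modalities (examples of the kinds discussed)."""
    VISUAL_SALIENCY = auto()     # e.g. heatmap over an image
    NATURAL_LANGUAGE = auto()    # short textual rationale
    EXAMPLE_BASED = auto()       # nearest comparable examples
    FEATURE_IMPORTANCE = auto()  # ranked feature contributions


@dataclass
class UserContext:
    """Contextual factors about the user and their task."""
    time_available_s: float      # how long the user can spend on the explanation
    skill_level: str             # e.g. "novice", "analyst", "expert"
    has_raw_data_access: bool    # may the user see the underlying sensor data?
    device: str                  # e.g. "handheld", "workstation"


@dataclass
class EnvironmentContext:
    """Environmental factors that constrain what can be delivered."""
    bandwidth_kbps: float
    usable_sensors: List[str]


def choose_modality(user: UserContext, env: EnvironmentContext) -> Modality:
    """Pick an explanation modality from the contexts (toy rules for illustration)."""
    if user.time_available_s < 10 or env.bandwidth_kbps < 64:
        return Modality.NATURAL_LANGUAGE       # quick, low-bandwidth summary
    if user.has_raw_data_access and user.device == "workstation":
        return Modality.VISUAL_SALIENCY        # rich visual explanation over the data
    if user.skill_level == "expert":
        return Modality.FEATURE_IMPORTANCE
    return Modality.EXAMPLE_BASED


if __name__ == "__main__":
    user = UserContext(time_available_s=120, skill_level="analyst",
                       has_raw_data_access=True, device="workstation")
    env = EnvironmentContext(bandwidth_kbps=2000,
                             usable_sensors=["camera", "acoustic"])
    print(choose_modality(user, env))  # Modality.VISUAL_SALIENCY
```

In a conversational setting, a selection step like this would be re-evaluated as the dialogue progresses, so the modality can change if the user's available time, device or the environment changes.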
Authors
  • Dave Braines (IBM UK)
  • Alun Preece (Cardiff)
  • Dan Harborne (Cardiff)
Date Apr-2019
Venue SPIE - Defense + Commercial Sensing 2019