Explainable to whom? Why audience matters when building AI systems that can explain themselves

Abstract Recent high-profile cases have highlighted the need for artificial intelligence (AI) systems to explain their outputs, to assure users that these systems are functioning appropriately and, in particular, are free from harmful biases. There is a large and active multidisciplinary research and development community attempting to address this problem, but one neglected issue is consideration of the audiences for AI system explanations. This talk looked at the different roles that humans play in relation to AI systems, and how explanations need to be crafted differently for different kinds of recipients.
Authors
  • Alun Preece (Cardiff)
Date Mar-2021
Venue Wales Institute of Social and Economic Research and Data, and Department for Work and Pensions Areas of Research Interest (ARI) Workshop