Abstract
Recent high-profile cases have highlighted the need for artificial intelligence (AI) systems to explain their outputs, in order to assure users that these systems are functioning appropriately, including being free from harmful biases. A large and active multidisciplinary research and development community is attempting to address this problem, but one issue that has been neglected is consideration of the audiences for AI system explanations. This talk examined the different roles that humans play in relation to AI systems, and how explanations need to be crafted differently for different kinds of recipients.