Asking Why in AI: Explainability of Intelligent Systems—Perspectives and Challenges

Abstract Recent rapid progress in machine learning (ML), particularly so-called 'deep learning', has led to a resurgence of interest in the explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML-based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.
Authors
  • Alun Preece (Cardiff)
Date Apr-2018
Venue Intelligent Systems in Accounting, Finance and Management