Exploration of the Effect of Task and User Role on the Evaluation of Interpretability Techniques

Abstract As the effectiveness and accuracy of machine learning techniques have increased over time, so has the role they play in offering decision support across many domains. In most of these domains, the ability to understand the reasoning behind the outputs of these machine-learning-based systems is crucial to the validity of adopting them in real-world scenarios. In recent work, we have proposed that the nature of the task and the role of the user within that task affect which forms of explanation offered by machine learning systems provide the most relevant insights to the user. This demo is a step towards exploring this task- and user-role-dependent impact on the value of different explanation types. By presenting an interface that allows the user to swap and configure the dataset, the model, and the explanations offered, we create a platform for building intuition about the properties of various interpretability techniques within the scope of different tasks.
Authors
  • Dan Harborne (Cardiff)
  • Alun Preece (Cardiff)
  • Harrison Taylor (Cardiff)
  • Liam Hiley (Cardiff)
  • Chris Willis (BAE)
  • Dave Braines (IBM UK)
  • Richard Tomsett (IBM UK)
  • Supriyo Chakraborty (IBM US)
  • Simon Julier (UCL)
  • Amy Widdicombe (UCL)
  • Moustafa Alzantot (UCLA)
Date Sep-2018
Venue 2nd Annual Fall Meeting of the DAIS ITA, 2018