Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

Abstract Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom it is interpretable. We describe a model intended to help answer this question by identifying different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.
Authors
  • Richard Tomsett (IBM UK)
  • Dave Braines (IBM UK)
  • Dan Harborne (Cardiff)
  • Alun Preece (Cardiff)
  • Supriyo Chakraborty (IBM US)
Date Sep-2018
Venue 2nd Annual Fall Meeting of the DAIS ITA, 2018