Abstract
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable. We describe a model intended to help answer this question, by identifying different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals, and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.