An Analysis of Reliability Using LIME with Deep Learning Models

Abstract As machine learning solutions become more commonplace, it is essential that human users are able to trust the outputs of these systems. Whilst many machine learning systems show great potential in their ability to perform classification or prediction tasks, they are often undermined by their inability to explain why a proposed outcome was chosen. These black box systems are inherently unable to provide such explanations due to their complex internal composition, so novel techniques to extract or generate explanations are needed. Much research is now focused on identifying such explainability techniques for machine learning systems. Tools and frameworks such as LIME (Local Interpretable Model-Agnostic Explanations) are now available to provide explanations and to check whether a machine learning model is actually detecting relevant features. Due to the approach taken by tools such as LIME, there appears to be inherent uncertainty, with potentially different (and often conflicting) explanations being generated for any given machine learning outcome. In this paper we investigate LIME in a simple image classification task and assess the consistency of the explanations generated. Against this baseline we then implement a number of simple algorithms to investigate whether aggregating multiple explanations into a single computed summary explanation can improve the stability (and therefore usefulness) of the explanations. Our results suggest that some of the apparent uncertainty experienced by human users is due to the way the results are visualized.
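
The aggregation idea described above can be illustrated with a minimal sketch using the lime Python package: run the image explainer several times on the same input and average the per-superpixel weights across runs. This is an assumption-laden illustration rather than the authors' exact algorithm; the `predict_fn` classifier function and the test `image` array are hypothetical placeholders for a real model and data.

```python
# Minimal sketch (not the paper's exact algorithm): aggregate several LIME image
# explanations by averaging per-superpixel weights across repeated runs.
# `predict_fn` (batch of HxWx3 images -> class probabilities) and `image`
# (a numpy array) are hypothetical placeholders for your own model and data.
from lime import lime_image


def aggregate_explanations(image, predict_fn, label, n_runs=10, num_samples=1000):
    """Run LIME n_runs times and return the mean weight for each superpixel."""
    explainer = lime_image.LimeImageExplainer()
    weight_sums = {}  # superpixel id -> accumulated weight across runs
    for _ in range(n_runs):
        # Fixing random_seed keeps the segmentation identical across runs, so
        # superpixel ids are comparable; the perturbation sampling still varies.
        explanation = explainer.explain_instance(
            image, predict_fn, labels=(label,), top_labels=None,
            num_samples=num_samples, random_seed=42)
        for superpixel, weight in explanation.local_exp[label]:
            weight_sums[superpixel] = weight_sums.get(superpixel, 0.0) + weight
    return {sp: total / n_runs for sp, total in weight_sums.items()}


# Example usage (hypothetical model):
# mean_weights = aggregate_explanations(image, model.predict, label=0)
# top_superpixels = sorted(mean_weights.items(), key=lambda kv: -abs(kv[1]))[:5]
```

Visualizing the averaged weights as a single heat map, rather than presenting each run's top superpixels separately, is one way such a summary explanation could reduce the apparent instability discussed in the abstract.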
Authors
  • Mitchell Stiffler
  • Adam Hudler
  • Eunjin Lee (IBM UK)
  • Dave Braines (IBM UK)
  • David Mott (IBM UK)
  • Dan Harborne (Cardiff)
Date Sep-2018
Venue 2nd Annual Fall Meeting of the DAIS ITA, 2018
Variants