Augmenting saliency maps with uncertainty

Abstract Explanations are generated to accompany a model decision, indicating the features of the input data that were most relevant to that decision. Explanations are important not only for understanding the decisions of deep neural networks, which in spite of their huge success in multiple domains operate largely as abstract black boxes, but also for other model classes such as gradient boosted decision trees. In this work, we propose methods, using both Bayesian and non-Bayesian approaches, to augment explanations with uncertainty scores. We believe that uncertainty-augmented saliency maps can help better calibrate trust between human analysts and machine learning models.
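
The sketch below illustrates one plausible instantiation of the idea, not the paper's exact method: Monte Carlo dropout (an approximate Bayesian approach) is used to sample multiple gradient saliency maps, whose per-feature mean and standard deviation give a saliency map augmented with an uncertainty score. The function name, network, and parameters are illustrative assumptions.

```python
# Hypothetical sketch of an uncertainty-augmented saliency map via
# Monte Carlo dropout; names and parameters are illustrative, not
# taken from the paper.
import torch
import torch.nn as nn

def mc_dropout_saliency(model, x, target_class, n_samples=30):
    """Return per-feature mean and std of gradient saliency maps,
    sampled under Monte Carlo dropout."""
    model.train()  # keep dropout active at inference time
    maps = []
    for _ in range(n_samples):
        x_in = x.clone().detach().requires_grad_(True)
        score = model(x_in)[0, target_class]
        score.backward()  # gradient of class score w.r.t. the input
        maps.append(x_in.grad.detach().abs())
    stacked = torch.stack(maps)  # shape: (n_samples, *x.shape)
    # Mean = saliency map; std = per-feature uncertainty score.
    return stacked.mean(dim=0), stacked.std(dim=0)

# Toy usage: a small dropout classifier on a 1x28x28 input.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(128, 10))
x = torch.randn(1, 1, 28, 28)
saliency_mean, saliency_std = mc_dropout_saliency(model, x, target_class=3)
```

A non-Bayesian variant could replace the dropout samples with, for example, an ensemble of independently trained models, aggregating their saliency maps in the same way.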
Authors
  • Supriyo Chakraborty (IBM US)
  • Prudhvi Gurram
  • Franck Le (IBM US)
  • Lance Kaplan (ARL)
  • Richard Tomsett (IBM UK)
Date Apr-2021
Venue Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE DCS, 2021