Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning

Abstract Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two of these: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are highly sensitive to certain kinds of input perturbations. We argue that these two issues are conceptually linked, and that insights into one can provide insights into the other. We illustrate these ideas with relevant examples from the literature and our own experiments.
Authors
  • Richard Tomsett (IBM UK)
  • Amy Widdicombe (UCL)
  • Tianwei Xing (UCLA)
  • Supriyo Chakraborty (IBM US)
  • Simon Julier (UCL)
  • Prudhvi Gurram (ARL)
  • Raghuveer Rao (ARL)
  • Mani Srivastava (UCLA)
Date July 2018
Venue 21st International Conference on Information Fusion