Provisioning Robust and Interpretable AI/ML-based Service Bundles

Abstract
Coalition operations environments are characterised by the need to share intelligence, surveillance and reconnaissance services. Increasingly, such services are based on artificial intelligence (AI) and machine learning (ML) technologies. Two key issues in the exploitation of AI/ML services are robustness and interpretability. Employing a diverse portfolio of services can make a system robust to 'unknown unknowns'. Interpretability, the need for services to offer explanation facilities to engender user trust, can be addressed by a variety of methods to generate either transparent or post hoc explanations according to users' requirements. This paper shows how a service-provisioning framework for coalition operations can be extended to address specific requirements for robustness and interpretability, allowing automatic selection of service bundles for intelligence, surveillance and reconnaissance tasks. The approach is demonstrated in a case study on traffic monitoring featuring a diverse set of AI/ML services based on deep neural networks and heuristic reasoning approaches.
Authors
  • Alun Preece (Cardiff)
  • Dan Harborne (Cardiff)
  • Ramya Raghavendra (IBM US)
  • Richard Tomsett (IBM UK)
  • Dave Braines (IBM UK)
Date Oct-2018
Venue IEEE Military Communications Conference (MILCOM) 2018
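
The abstract describes automatically selecting service bundles against robustness and interpretability requirements. The sketch below is purely illustrative and is not the paper's actual framework or API: all class names, fields and the greedy selection logic are assumptions, intended only to convey the idea of matching task requirements (explanation style, technique diversity for robustness) against a catalogue of AI/ML services.

```python
# Hypothetical sketch of requirement-driven service bundle selection.
# Not the authors' implementation; all names and structures are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    modality: str       # e.g. "image", "acoustic"
    technique: str      # e.g. "deep_nn", "heuristic"
    explanation: str    # e.g. "transparent", "post_hoc", "none"

@dataclass
class TaskRequirements:
    modalities: set                    # sensing modalities the task must cover
    explanation: str = "any"           # required explanation style, or "any"
    min_technique_diversity: int = 2   # diversity proxy for robustness

def select_bundle(catalog, req):
    """Pick one service per required modality, honouring the interpretability
    requirement and preferring techniques not already in the bundle."""
    bundle = []
    for modality in req.modalities:
        candidates = [
            s for s in catalog
            if s.modality == modality
            and (req.explanation == "any" or s.explanation == req.explanation)
        ]
        if not candidates:
            raise ValueError(f"no service covers modality {modality!r}")
        # Prefer a technique not yet used (diversity helps robustness).
        used = {s.technique for s in bundle}
        candidates.sort(key=lambda s: s.technique in used)
        bundle.append(candidates[0])
    diversity = len({s.technique for s in bundle})
    if diversity < min(req.min_technique_diversity, len(bundle)):
        raise ValueError("bundle does not meet technique-diversity requirement")
    return bundle

# Toy catalogue loosely inspired by the traffic-monitoring case study.
catalog = [
    Service("cnn_vehicle_detector", "image", "deep_nn", "post_hoc"),
    Service("rule_based_counter", "image", "heuristic", "transparent"),
    Service("acoustic_event_rules", "acoustic", "heuristic", "transparent"),
    Service("rnn_acoustic_classifier", "acoustic", "deep_nn", "post_hoc"),
]
req = TaskRequirements(modalities={"image", "acoustic"})
print([s.name for s in select_bundle(catalog, req)])
```

Under these assumptions, the selected bundle mixes a deep neural network service with a heuristic one, reflecting the abstract's point that a diverse portfolio supports robustness while each service's explanation style can be matched to user requirements.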