Coalition situational understanding via explainable neuro-symbolic reasoning and learning

Abstract Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to coalition situational understanding (CSU). However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services must be capable of explaining their outputs. We describe an integrated CSU architecture that combines neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to deal with temporal features.
Authors
  • Alun Preece (Cardiff)
  • Dave Braines (IBM UK)
  • Federico Cerutti
  • Jack Furby (Cardiff)
  • Liam Hiley (Cardiff)
  • Lance Kaplan (ARL)
  • Mark Law (Imperial)
  • Alessandra Russo (Imperial)
  • Mani Srivastava (UCLA)
  • Marc Roig Vilamala (Cardiff)
  • Tianwei Xing (UCLA)
Date Apr-2021
Venue Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE DCS, 2021