Reasoning and Learning Services for Coalition Situational Understanding

Abstract Situational understanding requires the ability to assess the current situation and anticipate future situations, calling for both pattern recognition and inference. A coalition involves multiple agencies sharing information and analytics. This paper considers how to harness distributed information sources, including multimodal sensors, together with machine learning and reasoning services, to perform situational understanding in a coalition context. To exemplify the approach, we focus on a technology integration experiment in which multimodal data (including video and still imagery, geospatial and weather data) is processed and fused in a service-oriented architecture by heterogeneous pattern recognition and inference components. We show how the architecture: (i) provides awareness of the current situation and prediction of future states, (ii) is robust to individual service failure, (iii) supports the generation of "why" explanations for human analysts (including from components based on black-box deep neural networks, which pose particular challenges to explanation generation), and (iv) allows for the imposition of information sharing constraints in a coalition context where there are varying levels of trust between partner agencies.
Authors
  • Dan Harborne (Cardiff)
  • Ramya Raghavendra (IBM US)
  • Chris Willis (BAE)
  • Supriyo Chakraborty (IBM US)
  • Pranita Dewan (IBM US)
  • Mudhakar Srivatsa (IBM US)
  • Richard Tomsett (IBM UK)
  • Alun Preece (Cardiff)
Date Apr-2018
Venue SPIE Defense + Commercial Sensing 2018