Abstract
Convolutional Neural Networks (CNNs) have high predictive performance in part due to the compositional nature of their learned features, which go beyond low-level textures. Features from a variety of data modalities can be learned, making CNNs an ideal platform for supporting situational understanding in a coalition context. However, prior work has shown that CNNs tend to group similar high-level features from unrelated classes together. This contributes to the argument that CNNs are not interpretable, and can lead to low confidence in a system. A decision-maker using a CNN-based asset (possibly shared by a different coalition partner) requires confidence that the asset will produce results in a predictable way. This paper presents a technique for conditioning a CNN’s learned features so that high-level features are grouped according to semantic hierarchical concepts. This foundational work aims to achieve inherently interpretable neural networks that balance predictive performance with greater robustness due to the semantic coherence of their feature spaces.
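The abstract does not include implementation detail, so the following is a purely illustrative sketch rather than the authors' method: one common way to condition a CNN's high-level features on a semantic hierarchy is to add an auxiliary superclass classification head alongside the usual fine-grained head. The class names, layer sizes, loss weighting, and class-to-superclass mapping below are hypothetical.

```python
# Illustrative sketch only; not the technique described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicallyConditionedCNN(nn.Module):
    """Wraps any CNN backbone with a fine-grained head and a superclass head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_fine: int, n_coarse: int):
        super().__init__()
        self.backbone = backbone                          # CNN feature extractor
        self.fine_head = nn.Linear(feat_dim, n_fine)      # leaf-class logits
        self.coarse_head = nn.Linear(feat_dim, n_coarse)  # superclass logits

    def forward(self, x: torch.Tensor):
        z = self.backbone(x)                              # shared high-level features
        return self.fine_head(z), self.coarse_head(z)


def hierarchical_loss(fine_logits, coarse_logits, fine_y, coarse_y, alpha=0.5):
    # Penalising both fine- and coarse-grained errors pulls features of classes
    # that share a superclass toward the same region of feature space.
    return F.cross_entropy(fine_logits, fine_y) + alpha * F.cross_entropy(coarse_logits, coarse_y)
```

Training with the combined loss encourages the kind of semantic grouping of high-level features that the abstract describes, at the cost of a tunable trade-off (here the hypothetical weight `alpha`) against pure fine-grained accuracy.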
Authors
- Harrison Taylor (Cardiff)
- Richard Tomsett (IBM UK)
- Prudhvi Gurram (ARL)
- Supriyo Chakraborty (IBM US)
- Yulia Hicks (Cardiff)
- David Marshall (Cardiff)
- Alun Preece (Cardiff)
Date
Sep-2020
Venue
4th Annual Fall Meeting of the DAIS ITA, 2020