Efficient Continual Learning Using Bayesian Approaches

Abstract Dynamic and uncertain changes frequently occur in coalition operations. It is important to rapidly adapt analytics services to such changes to deliver accurate situational awareness to soldiers, which requires the capability of adapting analytics models to newly received data during real-time operation. In this paper, we focus on this continual learning problem. We propose a method that lets a model learn from new data while minimally forgetting what it has learned from old data. This is challenging due to the "catastrophic forgetting" phenomenon that affects many learning models, such as deep neural networks. With the goal of continually learning (training) a classifier, our method uses Bayesian neural networks as the classifier model, which have theoretical properties that are favorable for continual learning compared to standard neural networks. We analyze these properties theoretically, based on which we devise an algorithm that determines when and how to save, sample, and merge models under given computation and communication resource constraints. Empirical results show that our method outperforms state-of-the-art continual learning techniques. We also discuss how our technique can be extended to decentralized learning scenarios such as federated learning.
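The abstract does not include code, but the core Bayesian idea it relies on, the posterior over model parameters after old data becoming the prior when training on new data, can be sketched in a few lines. The following is a minimal illustration only, not the authors' method: it uses Bayesian logistic regression with a diagonal Laplace approximation instead of a full Bayesian neural network, and the toy data, function names, and hyperparameters are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_map(X, y, prior_mean, prior_prec, lr=0.1, steps=2000):
    # MAP estimate of logistic-regression weights under a
    # diagonal Gaussian prior (gradient descent on the neg. log posterior).
    w = prior_mean.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) + prior_prec * (w - prior_mean)
        w -= lr * grad / len(y)
    return w

def laplace_precision(X, w, prior_prec):
    # Diagonal Laplace approximation of the posterior precision:
    # prior precision plus the diagonal of the log-likelihood Hessian.
    p = sigmoid(X @ w)
    return prior_prec + (X ** 2).T @ (p * (1.0 - p))

def make_batch(n):
    # Hypothetical toy data: two Gaussian blobs, plus a bias column.
    X0 = rng.normal(-2.0, 1.0, size=(n, 2))
    X1 = rng.normal(2.0, 1.0, size=(n, 2))
    X = np.hstack([np.vstack([X0, X1]), np.ones((2 * n, 1))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Continual-learning recursion: start from a broad prior, then fold in
# each data batch in turn -- the approximate posterior after batch k
# serves as the prior for batch k+1, so old data need not be revisited.
mean, prec = np.zeros(3), np.full(3, 1.0)
for _ in range(2):
    X, y = make_batch(40)
    mean = fit_map(X, y, mean, prec)
    prec = laplace_precision(X, mean, prec)

# Accuracy on a fresh held-out batch from the same distribution.
X_test, y_test = make_batch(200)
acc = np.mean((sigmoid(X_test @ mean) > 0.5) == (y_test == 1))
```

The growing precision vector is what distinguishes this from naively retraining: parameters that old data pinned down tightly resist being overwritten by new batches, which is the mechanism the paper analyzes at neural-network scale.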
Authors
  • Tiffany Tuor (Imperial)
  • Shiqiang Wang (IBM US)
  • Kin Leung (Imperial)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020