Abstract
Federated learning is an approach to decentralized model training/adaptation in which only limited information is exchanged among local nodes. In military coalition environments, this has the benefit of preserving each coalition member's sensitive information while still allowing the members to collaboratively learn an analytics model for a common goal. Since coalition operations can change rapidly and dynamically, it is important that federated learning in this setting is fast and consumes few computation and communication resources. In this demonstration, we show the benefits of our recent technical developments, namely model pruning and accelerated gradient descent, for efficient federated learning. We consider a scenario where two coalition members have their own local datasets that may each be biased in certain ways, and we illustrate how our proposed method reduces the model training time. We will also show that, in addition to reducing the training time, model pruning decreases the inference time on Raspberry Pi devices.
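As a rough illustration of the ideas named above (federated averaging between two nodes with biased local data, plus magnitude-based model pruning), the following minimal Python sketch simulates one training loop. All function names, data, and parameters here are hypothetical and are not taken from the demonstration itself; the pruning shown is simple magnitude-based pruning, not necessarily the authors' exact scheme.

```python
import numpy as np

# Illustrative sketch only: federated averaging over two simulated coalition
# nodes with magnitude-based pruning of the aggregated model. All names and
# hyperparameters are hypothetical, not the authors' actual method.

rng = np.random.default_rng(0)

def local_step(weights, data, labels, lr=0.1):
    """One local gradient step of linear regression on a node's private data."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def prune_by_magnitude(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping roughly `keep_ratio` of them."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = np.sort(np.abs(weights))[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Two coalition members with their own (possibly biased) local datasets.
d = 20
X1, y1 = rng.normal(size=(100, d)), rng.normal(size=100)
X2, y2 = rng.normal(size=(100, d)) + 1.0, rng.normal(size=100)  # shifted, i.e. biased, data

global_w = np.zeros(d)
for _ in range(10):
    # Each node trains locally; only model updates are exchanged, never raw data.
    w1 = local_step(global_w, X1, y1)
    w2 = local_step(global_w, X2, y2)
    global_w = (w1 + w2) / 2                       # federated averaging
    global_w, mask = prune_by_magnitude(global_w)  # prune to shrink the model

print("non-zero weights after pruning:", int(mask.sum()), "of", d)
```

A smaller (pruned) model is cheaper both to communicate during training and to evaluate at inference time, which is the effect the demonstration reports on Raspberry Pi devices.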