On the Performance Tradeoffs of Federated Learning in Resource Constrained Environments

Abstract In this paper, we study how to speed up federated learning in resource-constrained environments. Our work is motivated by scenarios where agents (e.g., soldiers from different coalitions) want to update a global model as soon as new data is collected in order to make better predictions. Previous work proposed techniques that speed up training by trading accuracy for speed. In this work, we tackle the problem from another perspective and investigate how to train faster without sacrificing performance. The paper's contributions are to quantify the amount of gradient compression that accelerated methods can tolerate without losing training performance, and to characterize model accuracy in terms of training time and number of rounds. We also present preliminary experimental results that illustrate how a simple accelerated gradient compression scheme improves over standard gradient descent.
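The abstract does not specify the exact scheme, so the following is only a minimal illustrative sketch of what an "accelerated gradient compression" round could look like: each agent sparsifies its local gradient with top-k compression, and the server aggregates the compressed gradients and applies a Nesterov-style momentum update. The function names, learning rate, momentum factor, and k value are all assumptions, not the authors' method.

```python
# Hypothetical sketch: federated rounds with top-k gradient compression and an
# accelerated (momentum) server update. Parameters are illustrative only.
import numpy as np

def top_k_compress(grad, k):
    """Keep only the k largest-magnitude entries of the gradient (sparsification)."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def local_gradient(w, X, y):
    """Mean least-squares gradient on one agent's local data."""
    return X.T @ (X @ w - y) / len(y)

def federated_accelerated_round(w, momentum, agents, lr=0.05, beta=0.9, k=10):
    """One round: agents send compressed gradients, server averages and applies momentum."""
    compressed = [top_k_compress(local_gradient(w, X, y), k) for X, y in agents]
    avg_grad = np.mean(compressed, axis=0)        # server-side aggregation
    momentum = beta * momentum + avg_grad         # momentum accumulation
    w = w - lr * (avg_grad + beta * momentum)     # Nesterov-style lookahead step
    return w, momentum

# Toy usage: three agents with synthetic linear-regression data.
rng = np.random.default_rng(0)
d = 50
w_true = rng.normal(size=d)
agents = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    agents.append((X, X @ w_true + 0.01 * rng.normal(size=200)))

w, m = np.zeros(d), np.zeros(d)
for _ in range(100):
    w, m = federated_accelerated_round(w, m, agents)
print("distance to true model:", np.linalg.norm(w - w_true))
```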
Authors
  • Victor Valls (Yale)
  • Shiqiang Wang (IBM US)
  • Kevin Chan (ARL)
  • Kin Leung (Imperial)
  • Leandros Tassiulas (Yale)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020