Accelerating Federated Learning on Edge Devices with Adaptive Pruning

Abstract Federated learning (FL) is an effective approach for training models from data distributed across client devices, preserving data privacy and reducing communication overhead in coalition environments. A key challenge is that nodes in a coalition network typically have far less computation power and communication bandwidth than server clusters in a datacenter. To address this challenge, we propose a novel weight pruning method that adapts the model size during FL, so as to minimize the cost (e.g., training time) of reaching any target accuracy. The best model size is found using a two-stage process that includes maximizing the approximate empirical risk reduction divided by the cost of one iteration. Our theoretical analysis and experiments with various datasets on Raspberry Pi devices show that: (i) our method reduces training time compared to conventional FL and other pruning-based methods; (ii) the pruned model converges to an accuracy very close to that of the original model while being much smaller, and it is also a lottery ticket of the original model.
Authors
  • Yuang Jiang (Yale)
  • Shiqiang Wang (IBM US)
  • Kin K. Leung (Imperial)
  • Kevin Chan (ARL)
  • Leandros Tassiulas (Yale)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020
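The abstract describes choosing the model size by maximizing the approximate empirical risk reduction divided by the cost of one iteration. The sketch below illustrates that ratio-maximization idea only; the function name select_model_size, the per-parameter importance proxy, the cost model, and the greedy prefix search are illustrative assumptions for this note, not the paper's actual two-stage algorithm.

```python
import numpy as np

def select_model_size(importance, unit_cost, base_cost):
    """Toy sketch of the size-selection criterion in the abstract:
    keep the parameter subset that maximizes
    (approximate empirical risk reduction) / (cost of one iteration).

    importance : per-parameter approximate risk reduction if kept
                 (hypothetical proxy, e.g., squared gradient magnitude)
    unit_cost  : per-parameter contribution to one iteration's time
    base_cost  : fixed per-iteration overhead independent of model size
    """
    # Rank parameters by benefit per unit cost, most valuable first.
    order = np.argsort(-importance / unit_cost)
    imp_sorted, cost_sorted = importance[order], unit_cost[order]

    # Cumulative gain and cumulative iteration cost for each prefix size.
    cum_gain = np.cumsum(imp_sorted)
    cum_cost = base_cost + np.cumsum(cost_sorted)

    # Keep the prefix whose gain-to-cost ratio is highest.
    best_k = int(np.argmax(cum_gain / cum_cost)) + 1
    keep_mask = np.zeros(importance.shape[0], dtype=bool)
    keep_mask[order[:best_k]] = True
    return keep_mask

# Toy usage: 10 parameters with random importance and equal per-parameter cost.
rng = np.random.default_rng(0)
mask = select_model_size(rng.random(10), np.full(10, 1e-3), base_cost=0.05)
print("kept", int(mask.sum()), "of 10 parameters")
```

In this sketch, a larger kept set always raises the numerator but also the per-iteration cost, so the maximizer picks an intermediate model size, which is the trade-off the adaptive pruning method in the abstract navigates.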