Distributed machine learning at edge nodes

Abstract: A training process of a machine learning model is executed at the edge node for a number of iterations to generate a model parameter based at least in part on a local dataset and a global model parameter. A resource parameter set indicative of resources available at the edge node is estimated. The model parameter and the resource parameter set are sent to a synchronization node. Updates to the global model parameter and the number of iterations are received from the synchronization node based at least in part on the model parameters and resource parameter sets of the edge nodes. The training process of the machine learning model is repeated at the edge node to determine an update to the model parameter based at least in part on the local dataset and the updates to the global model parameter and the number of iterations from the synchronization node.
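The round structure described in the abstract (local training for a given number of iterations, resource estimation, reporting to a synchronization node, and receiving an updated global parameter and iteration count) can be sketched as below. This is a minimal illustration only: the scalar least-squares model, the local datasets, the `max_local_iters` resource metric, and the averaging rule are all assumptions for the sketch, not the patented method.

```python
import random

def local_training(w, local_data, tau, lr=0.1):
    """Run tau SGD steps on the local dataset, starting from the global
    model parameter w (scalar least-squares model, purely illustrative)."""
    for _ in range(tau):
        x, y = random.choice(local_data)
        w -= lr * 2 * (w * x - y) * x  # gradient of the squared error
    return w

def estimate_resources():
    """Placeholder resource estimate: how many local iterations this
    edge node can afford in the next round (hypothetical metric)."""
    return {"max_local_iters": 10}

def synchronize(updates):
    """Aggregate the edge-node model parameters (simple average) and
    choose the next iteration count from the reported resources."""
    new_global = sum(w for w, _ in updates) / len(updates)
    next_tau = min(r["max_local_iters"] for _, r in updates)
    return new_global, next_tau

random.seed(0)
edge_datasets = [[(1.0, 2.0), (2.0, 4.1)],   # local dataset, edge node 1
                 [(1.5, 3.1), (3.0, 5.9)]]   # local dataset, edge node 2
global_w, tau = 0.0, 5
for _ in range(3):  # the round repeats, as in the claim
    updates = [(local_training(global_w, data, tau), estimate_resources())
               for data in edge_datasets]
    global_w, tau = synchronize(updates)
print(round(global_w, 2))
```

Both local datasets here follow roughly y = 2x, so the averaged global parameter drifts toward a slope near 2 after a few rounds, while the iteration count adapts to the most constrained node.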
Authors
  • Shiqiang Wang (IBM US)
  • Tiffany Tuor (Imperial)
  • Theodoros Salonidis (IBM US)
  • Christian Makaya (IBM US)
  • Bongjun Ko (IBM US)
Date October 2019
Venue U.S. Patent Application 15/952,625, filed October 17, 2019