Sharing Models or Coresets: A Study based on Membership Inference Attack

Abstract Distributed machine learning generally aims to train a global model over distributed data without collecting all the data at a centralized location. Two different approaches have been proposed: collecting and aggregating local models (federated learning) and collecting and training over representative data summaries (coresets). While each approach preserves data privacy to some extent by not sharing the raw data, the exact extent of protection is unclear under sophisticated attacks that try to infer the raw data from the shared information. We present the first comparison between the two approaches in terms of target model accuracy, communication cost, and data privacy, where the last is measured by the accuracy of a state-of-the-art attack strategy called the membership inference attack. Our experiments quantify the accuracy-privacy-cost tradeoff of each approach and reveal a nontrivial comparison that can be used to guide the design of model training processes.
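For readers unfamiliar with how membership inference attack accuracy serves as a privacy metric, the following is a minimal sketch of a confidence-thresholding membership inference attack. The model, synthetic dataset, and fixed threshold are illustrative assumptions, not the paper's exact attack setup; an attack accuracy near 0.5 indicates little membership leakage, while values approaching 1.0 indicate severe leakage.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: "members" are training points, "non-members" are held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model trained only on the member set (overfitting aids the attacker).
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def confidence_on_true_label(model, X, y):
    """Return the model's predicted probability for each point's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_in = confidence_on_true_label(target, X_in, y_in)
conf_out = confidence_on_true_label(target, X_out, y_out)

# Attack: guess "member" whenever the model's confidence exceeds a threshold.
threshold = 0.5  # illustrative; a real attacker tunes this, e.g. via shadow models
guesses = np.concatenate([conf_in, conf_out]) > threshold
truth = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
attack_accuracy = (guesses == truth).mean()  # 0.5 = no leakage, 1.0 = full leakage
print(f"Membership inference attack accuracy: {attack_accuracy:.3f}")
```

The same measurement applies to either training approach: the attacker queries the final model (trained via federated aggregation or on a coreset) and the resulting attack accuracy quantifies how much membership information the shared artifacts leak.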
Authors
  • Hanlin Lu (PSU)
  • Changchang Liu (IBM US)
  • Ting He (PSU)
  • Shiqiang Wang (IBM US)
  • Kevin Chan (ARL)
Date Sep-2020
Venue 4th Annual Fall Meeting of the DAIS ITA, 2020
Variants
  • doc-6051