Investigating Statistical Privacy Frameworks from the Perspective of Hypothesis Testing

Abstract Over the last decade, differential privacy (DP) has emerged as the gold standard for rigorous, provable privacy frameworks. However, there are few practical guidelines on how to apply differential privacy in practice, and a key challenge is how to set an appropriate value for the privacy parameter ε. In this work, we employ hypothesis testing, a statistical tool, to discover useful and interpretable guidelines for state-of-the-art privacy-preserving frameworks. We formalize and implement hypothesis testing in terms of an adversary's capability to infer mutually exclusive sensitive information about the input data (such as whether an individual has participated or not) from the output of the privacy-preserving mechanism. We quantify the success of the hypothesis testing using the precision-recall relation, which provides an interpretable and natural guideline for practitioners and researchers on selecting ε. Our key results include a quantitative analysis of how hypothesis testing can guide the choice of the privacy parameter ε in an interpretable manner for a differentially private mechanism and its variants. Importantly, our findings show that an adversary's auxiliary information, in the form of the prior distribution of the database and correlation across records and time, indeed influences the proper choice of ε. Finally, we show how the perspective of hypothesis testing provides useful insights into the relationships among a broad range of privacy frameworks, including differential privacy, Pufferfish privacy, Blowfish privacy, dependent differential privacy, inferential privacy, membership privacy, and mutual-information-based differential privacy.
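To make the hypothesis-testing view concrete, the sketch below simulates a threshold adversary trying to distinguish two neighboring databases from the output of a Laplace mechanism, and reports the adversary's empirical precision and recall. This is a minimal illustration, not the paper's implementation: the counting query, the Laplace mechanism, the midpoint threshold, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_count, epsilon, sensitivity=1.0, n=100_000):
    """Release a count with Laplace noise calibrated for epsilon-DP."""
    return true_count + rng.laplace(scale=sensitivity / epsilon, size=n)

# Two neighboring databases: the individual is absent (H0) or present (H1).
count_h0, count_h1 = 50, 51
epsilon = 0.5

out_h0 = laplace_mechanism(count_h0, epsilon)
out_h1 = laplace_mechanism(count_h1, epsilon)

# A simple threshold adversary: infer "present" whenever the noisy
# output exceeds the midpoint between the two hypothesized counts.
threshold = (count_h0 + count_h1) / 2
tp = np.sum(out_h1 > threshold)   # correctly inferred presence
fp = np.sum(out_h0 > threshold)   # falsely inferred presence
fn = np.sum(out_h1 <= threshold)  # missed presence

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"epsilon={epsilon}: precision={precision:.3f}, recall={recall:.3f}")
```

Under these assumptions, shrinking ε (i.e., adding more noise) drives this adversary's precision and recall toward the 0.5 achievable by random guessing with equal priors, which illustrates the kind of interpretable trade-off the paper uses to guide the choice of ε.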
Authors
  • Changchang Liu (IBM US)
  • Xi He
  • Thee Chanyaswad
  • Shiqiang Wang (IBM US)
  • Prateek Mittal
Date Jul-2019
Venue 19th Privacy Enhancing Technologies Symposium (PETS 2019)