Fair Transfer Learning with Missing Protected Attributes

Abstract Risk assessment is a growing application of machine learning models. When such models are used in high-stakes settings, especially ones regulated by anti-discrimination laws or governed by societal norms for fairness, it is important to ensure that learned models do not propagate and scale any biases that may exist in the training data. In this paper, we consider an additional challenge beyond fairness: unsupervised domain adaptation under covariate shift between a source and a target distribution. Motivated by the real-world problems of risk assessment in new markets for health insurance in the United States and for mobile money-based loans in East Africa, we give a precise formulation of the problem of machine learning under covariate shift with score parity constraints. Our formulation addresses settings in which protected attributes are unavailable in one of the two domains. We propose two new weighting methods: prevalence-constrained covariate shift (PCCS), which does not require protected attributes in the target domain, and target-fair covariate shift (TFCS), which does not require protected attributes in the source domain. We empirically demonstrate their efficacy in two applications.
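
Both proposed methods are weighting methods for covariate shift. As background only, the sketch below shows the standard importance-weighting baseline such methods build on (assuming scikit-learn; X_source and X_target are hypothetical feature matrices): a domain classifier estimates the density ratio w(x) = p_target(x) / p_source(x), and the risk model is then fit on reweighted source data. This is not the paper's PCCS or TFCS, which additionally impose fairness (score parity) constraints on the weights.

    # Background sketch: classical importance weighting for covariate shift,
    # via the domain-classifier density-ratio trick. Not the paper's method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def covariate_shift_weights(X_source, X_target):
        """Estimate w(x) = p_target(x) / p_source(x) with a domain classifier."""
        X = np.vstack([X_source, X_target])
        # Domain labels: 0 = source sample, 1 = target sample.
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p = clf.predict_proba(X_source)[:, 1]        # P(domain = target | x)
        odds = p / (1.0 - p)                         # proportional to density ratio
        # Correct for the source/target sample-size imbalance.
        return odds * len(X_source) / len(X_target)

    # Usage: fit the risk model on source data reweighted toward the target.
    # w = covariate_shift_weights(X_s, X_t)
    # model = LogisticRegression().fit(X_s, y_s, sample_weight=w)
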
Authors
  • Amanda Coston (IBM US)
  • Karthikeyan Ramamurthy (IBM US)
  • Dennis Wei (IBM US)
  • Kush Varshney (IBM US)
  • Skyler Speakman (IBM Kenya)
  • Zairah Mustahsan (IBM US)
  • Supriyo Chakraborty (IBM US)
Date Jan-2019
Venue AAAI/ACM Conference on AI, Ethics, and Society (AIES)