Stochastic Differentially Private and Fair Learning

Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy, PMLR 214:86-119, 2023.

Abstract

Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works have aimed to develop learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require a full batch of data in each iteration in order to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over state-of-the-art baselines and can be applied to larger-scale problems with non-binary target/sensitive attributes.
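To make the "stochastic min-max" ingredient concrete: the abstract does not spell out the paper's actual algorithm, but the general recipe it builds on is noisy minibatch stochastic gradient descent-ascent (SGDA), where per-sample gradients of the outer (model) variable are clipped and perturbed with Gaussian noise for DP, while the inner (adversarial/fairness) variable is updated by ascent. The sketch below is a generic illustration of that recipe on an abstract min-max objective; the function name `dp_sgda`, the toy objective, and all hyperparameters are our own assumptions, not the paper's DP-fair method or its privacy calibration.

```python
import numpy as np

def dp_sgda(grad_theta, grad_w, theta0, w0, data, *, steps=200,
            batch_size=8, lr_theta=0.05, lr_w=0.05,
            clip=1.0, noise_mult=1.0, seed=0):
    """Noisy minibatch gradient descent-ascent for min_theta max_w f(theta, w).

    Per-sample theta-gradients are clipped to norm <= clip and averaged, then
    Gaussian noise scaled to the clipped sensitivity is added -- the standard
    recipe behind DP stochastic optimization. The inner variable w takes a
    plain (non-private) ascent step on the same minibatch.
    """
    rng = np.random.default_rng(seed)
    theta, w = theta0.astype(float).copy(), w0.astype(float).copy()
    n = len(data)
    for _ in range(steps):
        batch = data[rng.choice(n, size=batch_size, replace=False)]
        # Clip each per-sample gradient, then average over the minibatch.
        g_theta = np.zeros_like(theta)
        for x in batch:
            g = grad_theta(theta, w, x)
            g_theta += g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        g_theta /= batch_size
        # Gaussian noise proportional to clip / batch_size (the sensitivity).
        g_theta += rng.normal(0.0, noise_mult * clip / batch_size,
                              size=theta.shape)
        g_w = np.mean([grad_w(theta, w, x) for x in batch], axis=0)
        theta -= lr_theta * g_theta   # descent on the outer problem
        w += lr_w * g_w               # ascent on the (concave) inner problem
    return theta, w

# Toy problem: min_theta max_w  E_x[(theta - x)^2 / 2] + w*theta - w^2 / 2.
# The inner max gives w* = theta, so the solution is theta* = mean(x) / 2.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, size=(200, 1))
theta, w = dp_sgda(lambda th, w_, x: (th - x) + w_,   # d/d theta
                   lambda th, w_, x: th - w_,          # d/d w
                   np.zeros(1), np.zeros(1), data)
# theta settles near mean(x)/2 ~= 1.0 despite the clipping and DP noise.
```

The point of the toy run is only that the noisy, minibatch iterates still converge (to within the noise floor) on a min-max problem whose inner maximization is strongly concave, which is the regime the paper's utility guarantee addresses.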

Cite this Paper


BibTeX
@InProceedings{pmlr-v214-lowy23a,
  title     = {Stochastic Differentially Private and Fair Learning},
  author    = {Lowy, Andrew and Gupta, Devansh and Razaviyayn, Meisam},
  booktitle = {Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy},
  pages     = {86--119},
  year      = {2023},
  editor    = {Dieng, Awa and Rateike, Miriam and Farnadi, Golnoosh and Fioretto, Ferdinando and Kusner, Matt and Schrouff, Jessica},
  volume    = {214},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v214/lowy23a/lowy23a.pdf},
  url       = {https://proceedings.mlr.press/v214/lowy23a.html},
  abstract  = {Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups such as individuals with certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require full batch of data in each iteration of the algorithm to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes.}
}
Endnote
%0 Conference Paper
%T Stochastic Differentially Private and Fair Learning
%A Andrew Lowy
%A Devansh Gupta
%A Meisam Razaviyayn
%B Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy
%C Proceedings of Machine Learning Research
%D 2023
%E Awa Dieng
%E Miriam Rateike
%E Golnoosh Farnadi
%E Ferdinando Fioretto
%E Matt Kusner
%E Jessica Schrouff
%F pmlr-v214-lowy23a
%I PMLR
%P 86--119
%U https://proceedings.mlr.press/v214/lowy23a.html
%V 214
%X Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups such as individuals with certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require full batch of data in each iteration of the algorithm to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger scale problems with non-binary target/sensitive attributes.
APA
Lowy, A., Gupta, D. & Razaviyayn, M. (2023). Stochastic Differentially Private and Fair Learning. Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy, in Proceedings of Machine Learning Research 214:86-119. Available from https://proceedings.mlr.press/v214/lowy23a.html.
