Fair Regression: Quantitative Definitions and Reduction-Based Algorithms

Alekh Agarwal, Miroslav Dudik, Zhiwei Steven Wu
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:120-129, 2019.

Abstract

In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems fair regression. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness–accuracy frontiers on several standard datasets.
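To make the two fairness definitions from the abstract concrete, here is a minimal Python sketch (not the paper's reduction algorithms) for auditing a fitted regressor's predictions. All names (y_true, y_pred, group, zeta, grid) are hypothetical; the loss is least squares for illustration, though the paper covers arbitrary Lipschitz-continuous losses.

import numpy as np

def bounded_group_loss_violation(y_true, y_pred, group, zeta):
    """Per-group mean squared loss minus the bound zeta.

    'Bounded group loss' asks that the loss restricted to every
    protected group stay below the pre-determined level zeta,
    i.e., every returned value should be <= 0.
    """
    losses = (y_true - y_pred) ** 2
    return {g: losses[group == g].mean() - zeta for g in np.unique(group)}

def statistical_parity_gap(y_pred, group, grid):
    """Largest gap, over thresholds z in grid, between any group's
    CDF of predictions and the overall CDF.

    'Statistical parity' asks that the prediction be statistically
    independent of the protected attribute, i.e., P(f(X) <= z | A = a)
    should match P(f(X) <= z) for every threshold z and group a.
    """
    gap = 0.0
    for z in grid:
        overall = np.mean(y_pred <= z)
        for g in np.unique(group):
            gap = max(gap, abs(np.mean(y_pred[group == g] <= z) - overall))
    return gap

A gap of 0 in the second function corresponds to exact statistical parity; the paper's schemes trade this quantity off against accuracy to trace a fairness-accuracy frontier.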

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-agarwal19d,
  title     = {Fair Regression: Quantitative Definitions and Reduction-Based Algorithms},
  author    = {Agarwal, Alekh and Dudik, Miroslav and Wu, Zhiwei Steven},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {120--129},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/agarwal19d/agarwal19d.pdf},
  url       = {https://proceedings.mlr.press/v97/agarwal19d.html},
  abstract  = {In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems fair regression. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness–accuracy frontiers on several standard datasets.}
}
Endnote
%0 Conference Paper
%T Fair Regression: Quantitative Definitions and Reduction-Based Algorithms
%A Alekh Agarwal
%A Miroslav Dudik
%A Zhiwei Steven Wu
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-agarwal19d
%I PMLR
%P 120--129
%U https://proceedings.mlr.press/v97/agarwal19d.html
%V 97
%X In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems fair regression. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness–accuracy frontiers on several standard datasets.
APA
Agarwal, A., Dudik, M., & Wu, Z.S. (2019). Fair Regression: Quantitative Definitions and Reduction-Based Algorithms. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:120-129. Available from https://proceedings.mlr.press/v97/agarwal19d.html.