Nonconvex Optimization for Regression with Fairness Constraints

Junpei Komiyama, Akiko Takeda, Junya Honda, Hajime Shimao
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2737-2746, 2018.

Abstract

The unfairness of a regressor is evaluated by measuring the correlation between the estimator and the sensitive attribute (e.g., race, gender, age), and the coefficient of determination (CoD) is a natural extension of the correlation coefficient when more than one sensitive attribute exists. As is well known, there is a trade-off between fairness and accuracy of a regressor, which implies that a perfectly fair optimizer does not always yield a useful prediction. Taking this into consideration, we optimize the accuracy of the estimation subject to a user-defined level of fairness. However, a fairness level as a constraint induces a nonconvexity of the feasible region, which precludes the use of an off-the-shelf convex optimizer. Despite such nonconvexity, we show that an exact solution is available by using tools of global optimization theory. Furthermore, we propose a nonlinear extension of the method by kernel representation. Unlike most existing fairness-aware machine learning methods, our method allows us to deal with numeric and multiple sensitive attributes.
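The constrained problem the abstract describes can be sketched on toy data: minimize the mean squared error of a linear regressor subject to a user-defined bound on the squared correlation (the single-attribute case of the CoD) between the prediction and a sensitive attribute. The sketch below uses a generic local solver (scipy's SLSQP) purely for illustration; because the feasible region is nonconvex, a local solver is not the exact global method the paper develops, and all data and variable names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 3
s = rng.normal(size=n)                           # numeric sensitive attribute
X = rng.normal(size=(n, d)) + 0.5 * s[:, None]   # features correlated with s
y = X @ np.array([1.0, -2.0, 0.5]) + s + 0.1 * rng.normal(size=n)

eps = 0.01  # user-defined fairness level: bound on squared correlation

def mse(w):
    r = X @ w - y
    return r @ r / n

def sq_corr(w):
    """Squared correlation between the prediction X @ w and s."""
    p = X @ w
    p = p - p.mean()
    sc = s - s.mean()
    denom = np.sqrt((p @ p) * (sc @ sc)) + 1e-12
    return (p @ sc / denom) ** 2

w0 = np.linalg.lstsq(X, y, rcond=None)[0]        # unconstrained least squares
res = minimize(
    mse, w0, method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda w: eps - sq_corr(w)}],
)
w_fair = res.x
```

The unconstrained solution `w0` typically violates the fairness constraint (its prediction inherits the correlation with `s`), while `w_fair` trades some accuracy for a squared correlation at most `eps`, illustrating the fairness–accuracy trade-off the abstract refers to.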

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-komiyama18a,
  title     = {Nonconvex Optimization for Regression with Fairness Constraints},
  author    = {Komiyama, Junpei and Takeda, Akiko and Honda, Junya and Shimao, Hajime},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2737--2746},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/komiyama18a/komiyama18a.pdf},
  url       = {https://proceedings.mlr.press/v80/komiyama18a.html},
  abstract  = {The unfairness of a regressor is evaluated by measuring the correlation between the estimator and the sensitive attribute (e.g., race, gender, age), and the coefficient of determination (CoD) is a natural extension of the correlation coefficient when more than one sensitive attribute exists. As is well known, there is a trade-off between fairness and accuracy of a regressor, which implies a perfectly fair optimizer does not always yield a useful prediction. Taking this into consideration, we optimize the accuracy of the estimation subject to a user-defined level of fairness. However, a fairness level as a constraint induces a nonconvexity of the feasible region, which disables the use of an off-the-shelf convex optimizer. Despite such nonconvexity, we show an exact solution is available by using tools of global optimization theory. Furthermore, we propose a nonlinear extension of the method by kernel representation. Unlike most of existing fairness-aware machine learning methods, our method allows us to deal with numeric and multiple sensitive attributes.}
}
Endnote
%0 Conference Paper
%T Nonconvex Optimization for Regression with Fairness Constraints
%A Junpei Komiyama
%A Akiko Takeda
%A Junya Honda
%A Hajime Shimao
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-komiyama18a
%I PMLR
%P 2737--2746
%U https://proceedings.mlr.press/v80/komiyama18a.html
%V 80
%X The unfairness of a regressor is evaluated by measuring the correlation between the estimator and the sensitive attribute (e.g., race, gender, age), and the coefficient of determination (CoD) is a natural extension of the correlation coefficient when more than one sensitive attribute exists. As is well known, there is a trade-off between fairness and accuracy of a regressor, which implies a perfectly fair optimizer does not always yield a useful prediction. Taking this into consideration, we optimize the accuracy of the estimation subject to a user-defined level of fairness. However, a fairness level as a constraint induces a nonconvexity of the feasible region, which disables the use of an off-the-shelf convex optimizer. Despite such nonconvexity, we show an exact solution is available by using tools of global optimization theory. Furthermore, we propose a nonlinear extension of the method by kernel representation. Unlike most of existing fairness-aware machine learning methods, our method allows us to deal with numeric and multiple sensitive attributes.
APA
Komiyama, J., Takeda, A., Honda, J., & Shimao, H. (2018). Nonconvex Optimization for Regression with Fairness Constraints. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2737-2746. Available from https://proceedings.mlr.press/v80/komiyama18a.html.