Too Relaxed to Be Fair

Michael Lohaus, Michael Perrot, Ulrike Von Luxburg
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6360-6369, 2020.

Abstract

We address the problem of classification under fairness constraints. Given a notion of fairness, the goal is to learn a classifier that is not discriminatory against a group of individuals. In the literature, this problem is often formulated as a constrained optimization problem and solved using relaxations of the fairness constraints. We show that many existing relaxations are unsatisfactory: even if a model satisfies the relaxed constraint, it can be surprisingly unfair. We propose a principled framework to solve this problem. This new approach uses a strongly convex formulation and comes with theoretical guarantees on the fairness of its solution. In practice, we show that this method gives promising results on real data.
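To make the abstract's claim concrete, the following small sketch (not taken from the paper) compares a covariance-style linear relaxation of demographic parity, in the spirit of relaxations commonly used in constrained fair classification, against the true demographic parity gap on hand-picked toy scores. The function names, the toy numbers, and the specific choice of relaxation are assumptions for illustration only.

    import numpy as np

    def demographic_parity_gap(scores, groups):
        # True fairness measure: difference in positive-prediction rates between the two groups.
        yhat = (scores > 0).astype(float)
        return abs(yhat[groups == 0].mean() - yhat[groups == 1].mean())

    def covariance_relaxation(scores, groups):
        # Illustrative linear relaxation: covariance between group membership and the raw score.
        # (A stand-in for the kind of relaxed constraint the paper analyzes, not the paper's own code.)
        return abs(np.mean((groups - groups.mean()) * scores))

    # Hand-picked scores of a hypothetical classifier and the corresponding sensitive attribute.
    scores = np.array([0.1, 0.1, 5.0, -4.8])
    groups = np.array([0, 0, 1, 1])

    print(covariance_relaxation(scores, groups))   # 0.0 -> relaxed constraint satisfied
    print(demographic_parity_gap(scores, groups))  # 0.5 -> positive rates differ sharply across groups

Here the group-wise means of the raw scores coincide, so the covariance vanishes, yet the positive-prediction rates differ by 0.5: satisfying the relaxed constraint does not imply fairness, which is the failure mode the paper's strongly convex formulation and guarantees are meant to address.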

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-lohaus20a,
  title     = {Too Relaxed to Be Fair},
  author    = {Lohaus, Michael and Perrot, Michael and Luxburg, Ulrike Von},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6360--6369},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/lohaus20a/lohaus20a.pdf},
  url       = {https://proceedings.mlr.press/v119/lohaus20a.html}
}
APA
Lohaus, M., Perrot, M. & Luxburg, U. V. (2020). Too Relaxed to Be Fair. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6360-6369. Available from https://proceedings.mlr.press/v119/lohaus20a.html.
