The Implicit Fairness Criterion of Unconstrained Learning

Lydia T. Liu, Max Simchowitz, Moritz Hardt
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4051-4060, 2019.

Abstract

We clarify what fairness guarantees we can and cannot expect to follow from unconstrained machine learning. Specifically, we show that in many settings, unconstrained learning on its own implies group calibration, that is, the outcome variable is conditionally independent of group membership given the score. A lower bound confirms the optimality of our upper bound. Moreover, we prove that the lower the excess risk of the learned score, the more strongly it violates separation and independence, two other standard fairness criteria. Our results challenge the view that group calibration necessitates an active intervention, suggesting that often we ought to think of it as a byproduct of unconstrained machine learning.
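The group calibration criterion discussed in the abstract says that the outcome Y is conditionally independent of group membership A given the score R: within any fixed score level, both groups realize the outcome at the same rate. The following minimal sketch (a hypothetical illustration, not code from the paper) simulates a perfectly calibrated score and checks this property empirically:

```python
import random

random.seed(0)

# Hypothetical illustration: a perfectly calibrated score assigns each
# individual a score r equal to P(Y = 1 | R = r), regardless of group.
# Group calibration then holds: Y is conditionally independent of the
# group A given the score R.

def sample(n=50_000):
    """Draw (score, outcome) pairs where outcome ~ Bernoulli(score)."""
    data = []
    for _ in range(n):
        r = random.choice([0.2, 0.5, 0.8])   # a few score levels for clarity
        y = 1 if random.random() < r else 0  # outcome drawn at rate r
        data.append((r, y))
    return data

def empirical_rate(data, score):
    """Fraction of positive outcomes among individuals with this score."""
    ys = [y for r, y in data if r == score]
    return sum(ys) / len(ys)

group_a = sample()  # both groups are scored by the same calibrated rule
group_b = sample()

for r in (0.2, 0.5, 0.8):
    # Within each score level, both groups' outcome rates match the score,
    # so knowing the group adds nothing once the score is known.
    print(f"score={r}: group a -> {empirical_rate(group_a, r):.2f}, "
          f"group b -> {empirical_rate(group_b, r):.2f}")
```

Note that the same score can satisfy calibration while violating separation (equal error rates across groups) or independence (score distribution identical across groups), which is the tension the paper's lower bounds quantify.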

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-liu19f,
  title     = {The Implicit Fairness Criterion of Unconstrained Learning},
  author    = {Liu, Lydia T. and Simchowitz, Max and Hardt, Moritz},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4051--4060},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/liu19f/liu19f.pdf},
  url       = {https://proceedings.mlr.press/v97/liu19f.html},
  abstract  = {We clarify what fairness guarantees we can and cannot expect to follow from unconstrained machine learning. Specifically, we show that in many settings, unconstrained learning on its own implies group calibration, that is, the outcome variable is conditionally independent of group membership given the score. A lower bound confirms the optimality of our upper bound. Moreover, we prove that as the excess risk of the learned score decreases, the more strongly it violates separation and independence, two other standard fairness criteria. Our results challenge the view that group calibration necessitates an active intervention, suggesting that often we ought to think of it as a byproduct of unconstrained machine learning.}
}
Endnote
%0 Conference Paper
%T The Implicit Fairness Criterion of Unconstrained Learning
%A Lydia T. Liu
%A Max Simchowitz
%A Moritz Hardt
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-liu19f
%I PMLR
%P 4051--4060
%U https://proceedings.mlr.press/v97/liu19f.html
%V 97
%X We clarify what fairness guarantees we can and cannot expect to follow from unconstrained machine learning. Specifically, we show that in many settings, unconstrained learning on its own implies group calibration, that is, the outcome variable is conditionally independent of group membership given the score. A lower bound confirms the optimality of our upper bound. Moreover, we prove that as the excess risk of the learned score decreases, the more strongly it violates separation and independence, two other standard fairness criteria. Our results challenge the view that group calibration necessitates an active intervention, suggesting that often we ought to think of it as a byproduct of unconstrained machine learning.
APA
Liu, L.T., Simchowitz, M. & Hardt, M. (2019). The Implicit Fairness Criterion of Unconstrained Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4051-4060. Available from https://proceedings.mlr.press/v97/liu19f.html.
