Fairness Without Demographics in Repeated Loss Minimization

Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1929-1938, 2018.

Abstract

Machine learning models (e.g., speech recognizers) trained on average loss suffer from representation disparity—minority groups (e.g., non-native speakers) carry less weight in the training objective, and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even turn initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
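To make the robust objective described in the abstract concrete, below is a minimal sketch of a chi-squared-ball DRO surrogate in Python/NumPy. It is an illustration based on the dual form of the worst-case risk over distributions near the empirical one; the constant C, the grid search over the dual threshold eta, and all names (dro_loss, alpha_min) are assumptions for exposition, not the authors' released implementation.

import numpy as np

def dro_loss(per_example_losses, alpha_min, num_eta=1000):
    """Chi-squared DRO surrogate: an upper bound on the average loss of any
    group making up at least an alpha_min fraction of the population.

    Sketch of min_eta  C * sqrt(mean(relu(loss - eta)^2)) + eta,
    with C = sqrt(2 * (1/alpha_min - 1)^2 + 1) (dual of the worst-case
    risk over a chi-squared ball; illustrative, not the paper's code).
    """
    losses = np.asarray(per_example_losses, dtype=float)
    C = np.sqrt(2.0 * (1.0 / alpha_min - 1.0) ** 2 + 1.0)
    # Search for the dual threshold eta on a grid over the observed loss range.
    etas = np.linspace(losses.min(), losses.max(), num_eta)
    excess = np.maximum(losses[None, :] - etas[:, None], 0.0)
    objective = C * np.sqrt((excess ** 2).mean(axis=1)) + etas
    return objective.min()

# Example: a 10% minority group with high loss dominates the robust objective.
losses = np.concatenate([np.full(90, 0.1), np.full(10, 2.0)])
print(losses.mean())                     # average (ERM) risk, about 0.29
print(dro_loss(losses, alpha_min=0.1))   # DRO surrogate, at least the minority risk

Because the surrogate upper-bounds the risk of every sufficiently large group without needing to know which examples belong to which group, minimizing it controls minority-group loss while remaining oblivious to group identities, which is the sense in which the method needs no demographics.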

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-hashimoto18a,
  title     = {Fairness Without Demographics in Repeated Loss Minimization},
  author    = {Hashimoto, Tatsunori and Srivastava, Megha and Namkoong, Hongseok and Liang, Percy},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1929--1938},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/hashimoto18a/hashimoto18a.pdf},
  url       = {https://proceedings.mlr.press/v80/hashimoto18a.html},
  abstract  = {Machine learning models (e.g., speech recognizers) trained on average loss suffer from representation disparity—minority groups (e.g., non-native speakers) carry less weight in the training objective, and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even turn initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.}
}
Endnote
%0 Conference Paper
%T Fairness Without Demographics in Repeated Loss Minimization
%A Tatsunori Hashimoto
%A Megha Srivastava
%A Hongseok Namkoong
%A Percy Liang
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-hashimoto18a
%I PMLR
%P 1929--1938
%U https://proceedings.mlr.press/v80/hashimoto18a.html
%V 80
%X Machine learning models (e.g., speech recognizers) trained on average loss suffer from representation disparity—minority groups (e.g., non-native speakers) carry less weight in the training objective, and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even turn initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
APA
Hashimoto, T., Srivastava, M., Namkoong, H. & Liang, P. (2018). Fairness Without Demographics in Repeated Loss Minimization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1929-1938. Available from https://proceedings.mlr.press/v80/hashimoto18a.html.
