Blind Pareto Fairness and Subgroup Robustness

Natalia L Martinez, Martin A Bertran, Afroditi Papadaki, Miguel Rodrigues, Guillermo Sapiro
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7492-7501, 2021.

Abstract

Much of the work in the field of group fairness addresses disparities between predefined groups based on protected features such as gender, age, and race, which need to be available at training time, and often also at test time. These approaches are static and retrospective, since algorithms designed to protect groups identified a priori cannot anticipate and protect the needs of different at-risk groups in the future. In this work we analyze the space of solutions for worst-case fairness beyond demographics, and propose Blind Pareto Fairness (BPF), a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service. BPF addresses fairness beyond demographics, that is, it does not rely on predefined notions of at-risk groups, either at train or at test time. Our experimental results show that the proposed framework improves worst-case risk in multiple standard datasets, while simultaneously providing better levels of service for the remaining population. The code is available at github.com/natalialmg/BlindParetoFairness
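As a rough illustration of the learner-vs-adversary game the abstract alludes to, the sketch below implements a generic worst-case-subgroup loop: an adversary reweights training examples but may place at most 1/(group_frac * n) mass on any single example, so it can only emphasise subgroups covering at least a group_frac fraction of the data, while the learner best-responds with a weighted fit and the adversary follows a projected-gradient (no-regret) update. This is not the authors' BPF algorithm (see the repository for the actual implementation); the function names project_capped_simplex and worst_case_subgroup_game and the parameters group_frac, rounds, and step are placeholders introduced here, and a plain scikit-learn logistic regression stands in for the learner.

import numpy as np
from sklearn.linear_model import LogisticRegression


def project_capped_simplex(v, cap):
    """Euclidean projection of v onto {w : sum(w) = 1, 0 <= w_i <= cap}.

    Bisection on the shift tau such that sum(clip(v - tau, 0, cap)) = 1;
    assumes cap * len(v) >= 1 so the constraint set is non-empty.
    """
    lo, hi = v.min() - 1.0, v.max()
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, cap).sum() > 1.0:
            lo = tau
        else:
            hi = tau
    return np.clip(v - 0.5 * (lo + hi), 0.0, cap)


def worst_case_subgroup_game(X, y, group_frac=0.2, rounds=50, step=1.0):
    """Illustrative minimax loop; y is assumed to be binary 0/1 labels.

    The adversary keeps a distribution w over training examples, capped at
    1/(group_frac * n) per example, so it can only represent (soft) subgroups
    of at least a group_frac fraction of the data. The learner best-responds
    to w; the adversary does projected gradient ascent on per-example losses.
    """
    y = np.asarray(y)
    n = len(y)
    cap = 1.0 / (group_frac * n)   # weight ceiling enforcing the minimum subgroup size
    w = np.full(n, 1.0 / n)        # start from the empirical (uniform) distribution
    models = []
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y, sample_weight=w * n)               # learner: best response to current weights
        models.append(clf)
        p_true = clf.predict_proba(X)[np.arange(n), y]   # probability assigned to the true label
        losses = -np.log(np.clip(p_true, 1e-12, 1.0))    # per-example log loss
        w = project_capped_simplex(w + step * losses, cap)  # adversary: shift mass to high-loss points
    return models, w

In no-regret dynamics the uniform mixture of the learner's iterates approximates the minimax classifier, so at test time one would average predict_proba over the returned models; the group_frac knob plays the role of the minimum subgroup size being protected.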

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-martinez21a,
  title     = {Blind Pareto Fairness and Subgroup Robustness},
  author    = {Martinez, Natalia L and Bertran, Martin A and Papadaki, Afroditi and Rodrigues, Miguel and Sapiro, Guillermo},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7492--7501},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/martinez21a/martinez21a.pdf},
  url       = {https://proceedings.mlr.press/v139/martinez21a.html},
  abstract  = {Much of the work in the field of group fairness addresses disparities between predefined groups based on protected features such as gender, age, and race, which need to be available at train, and often also at test, time. These approaches are static and retrospective, since algorithms designed to protect groups identified a priori cannot anticipate and protect the needs of different at-risk groups in the future. In this work we analyze the space of solutions for worst-case fairness beyond demographics, and propose Blind Pareto Fairness (BPF), a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service. BPF addresses fairness beyond demographics, that is, it does not rely on predefined notions of at-risk groups, neither at train nor at test time. Our experimental results show that the proposed framework improves worst-case risk in multiple standard datasets, while simultaneously providing better levels of service for the remaining population. The code is available at github.com/natalialmg/BlindParetoFairness}
}
Endnote
%0 Conference Paper
%T Blind Pareto Fairness and Subgroup Robustness
%A Natalia L Martinez
%A Martin A Bertran
%A Afroditi Papadaki
%A Miguel Rodrigues
%A Guillermo Sapiro
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-martinez21a
%I PMLR
%P 7492--7501
%U https://proceedings.mlr.press/v139/martinez21a.html
%V 139
%X Much of the work in the field of group fairness addresses disparities between predefined groups based on protected features such as gender, age, and race, which need to be available at train, and often also at test, time. These approaches are static and retrospective, since algorithms designed to protect groups identified a priori cannot anticipate and protect the needs of different at-risk groups in the future. In this work we analyze the space of solutions for worst-case fairness beyond demographics, and propose Blind Pareto Fairness (BPF), a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service. BPF addresses fairness beyond demographics, that is, it does not rely on predefined notions of at-risk groups, neither at train nor at test time. Our experimental results show that the proposed framework improves worst-case risk in multiple standard datasets, while simultaneously providing better levels of service for the remaining population. The code is available at github.com/natalialmg/BlindParetoFairness
APA
Martinez, N.L., Bertran, M.A., Papadaki, A., Rodrigues, M. & Sapiro, G. (2021). Blind Pareto Fairness and Subgroup Robustness. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7492-7501. Available from https://proceedings.mlr.press/v139/martinez21a.html.
