Active Sampling for Min-Max Fairness

Jacob D Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:53-65, 2022.

Abstract

We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization. The key intuition behind our approach is to use at each timestep a datapoint from the group that is worst off under the current model for updating the model. The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups. For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.
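To make the sampling rule in the abstract concrete, here is a minimal sketch, assuming a logistic regression model trained with SGD and labels in {-1, +1}; the function name, loss bookkeeping, and hyperparameters are illustrative assumptions, not the authors' implementation. At each step it identifies the group with the highest average loss under the current model, draws one of its datapoints, and takes a gradient step on that point.

import numpy as np

def active_minmax_sgd(X, y, groups, n_steps=1000, lr=0.1, seed=0):
    # Illustrative sketch: at each timestep, update the model on a datapoint
    # drawn from the group that is worst off under the current model.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    group_ids = np.unique(groups)

    def per_example_loss(w):
        # logistic loss per example, assuming labels y in {-1, +1}
        return np.log1p(np.exp(-y * (X @ w)))

    for _ in range(n_steps):
        losses = per_example_loss(w)
        # group with the largest average loss under the current model
        worst = max(group_ids, key=lambda g: losses[groups == g].mean())
        # sample a datapoint from the worst-off group
        idx = rng.choice(np.where(groups == worst)[0])
        xi, yi = X[idx], y[idx]
        # gradient of the logistic loss on the sampled point
        grad = -yi * xi / (1.0 + np.exp(yi * (xi @ w)))
        w -= lr * grad
    return w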

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-abernethy22a,
  title     = {Active Sampling for Min-Max Fairness},
  author    = {Abernethy, Jacob D and Awasthi, Pranjal and Kleindessner, Matth{\"a}us and Morgenstern, Jamie and Russell, Chris and Zhang, Jie},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {53--65},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/abernethy22a/abernethy22a.pdf},
  url       = {https://proceedings.mlr.press/v162/abernethy22a.html},
  abstract  = {We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization. The key intuition behind our approach is to use at each timestep a datapoint from the group that is worst off under the current model for updating the model. The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups. For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.}
}
Endnote
%0 Conference Paper
%T Active Sampling for Min-Max Fairness
%A Jacob D Abernethy
%A Pranjal Awasthi
%A Matthäus Kleindessner
%A Jamie Morgenstern
%A Chris Russell
%A Jie Zhang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-abernethy22a
%I PMLR
%P 53--65
%U https://proceedings.mlr.press/v162/abernethy22a.html
%V 162
%X We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization. The key intuition behind our approach is to use at each timestep a datapoint from the group that is worst off under the current model for updating the model. The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups. For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.
APA
Abernethy, J.D., Awasthi, P., Kleindessner, M., Morgenstern, J., Russell, C. & Zhang, J. (2022). Active Sampling for Min-Max Fairness. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:53-65. Available from https://proceedings.mlr.press/v162/abernethy22a.html.