Achieving Group Distributional Robustness and Minimax Group Fairness with Interpolating Classifiers

Natalia L. Martinez, Martin A. Bertran, Guillermo Sapiro
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2629-2637, 2024.

Abstract

Group distributional robustness optimization methods (GDRO) learn models that guarantee performance across a broad set of demographics. GDRO is often framed as a minimax game where an adversary proposes data distributions under which the model performs poorly; importance weights are used to mimic the adversarial distribution on finite samples. Prior work has shown that applying GDRO with interpolating classifiers requires strong regularization to generalize to unseen data. Moreover, these classifiers are not responsive to importance weights in the asymptotic training regime. In this work we propose Bi-level GDRO, a provably convergent formulation that decouples the adversary's and model learner's objectives and improves generalization guarantees. To address non-responsiveness of importance weights, we combine Bi-level GDRO with a learner that optimizes a temperature-scaled loss that can provably trade off performance between demographics, even on interpolating classifiers. We experimentally demonstrate the effectiveness of our proposed method on learning minimax classifiers on a variety of datasets. Code is available at github.com/MartinBertran/BiLevelGDRO.
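The minimax game described above can be sketched with the classic group-DRO weighted update (in the style of exponentiated-gradient ascent for the adversary): the adversary maintains a distribution q over groups and shifts mass toward groups with high loss, while the learner minimizes the q-weighted (importance-weighted) loss. This is a minimal illustrative sketch, not the paper's Bi-level GDRO algorithm or its temperature-scaled loss; the function names and the step size `eta_q` are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def adversary_step(group_losses, log_q, eta_q=0.1):
    """One exponentiated-gradient step for the adversary's group weights.

    Groups with higher loss receive more weight; the learner would then
    minimize the q-weighted loss (the importance-weighted objective).
    """
    log_q = log_q + eta_q * group_losses  # up-weight poorly served groups
    q = softmax(log_q)
    weighted_loss = float(q @ group_losses)  # learner's objective value
    return q, log_q, weighted_loss

# Toy run: three demographic groups with fixed per-group losses.
log_q = np.zeros(3)
group_losses = np.array([0.2, 0.9, 0.5])
for _ in range(100):
    q, log_q, obj = adversary_step(group_losses, log_q)
# The adversary concentrates mass on the worst-off group (loss 0.9),
# so minimizing the weighted loss approaches the minimax objective.
```

In a full training loop the learner would take a gradient step on the weighted loss between adversary updates; the paper's bi-level formulation decouples these two objectives rather than alternating on a single one.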

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-martinez24a, title = {Achieving Group Distributional Robustness and Minimax Group Fairness with Interpolating Classifiers}, author = {Martinez, Natalia L. and Bertran, Martin A. and Sapiro, Guillermo}, booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics}, pages = {2629--2637}, year = {2024}, editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen}, volume = {238}, series = {Proceedings of Machine Learning Research}, month = {02--04 May}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v238/martinez24a/martinez24a.pdf}, url = {https://proceedings.mlr.press/v238/martinez24a.html}, abstract = {Group distributional robustness optimization methods (GDRO) learn models that guarantee performance across a broad set of demographics. GDRO is often framed as a minimax game where an adversary proposes data distributions under which the model performs poorly; importance weights are used to mimic the adversarial distribution on finite samples. Prior work has shown that applying GDRO with interpolating classifiers requires strong regularization to generalize to unseen data. Moreover, these classifiers are not responsive to importance weights in the asymptotic training regime. In this work we propose Bi-level GDRO, a provably convergent formulation that decouples the adversary's and model learner's objectives and improves generalization guarantees. To address non-responsiveness of importance weights, we combine Bi-level GDRO with a learner that optimizes a temperature-scaled loss that can provably trade off performance between demographics, even on interpolating classifiers. We experimentally demonstrate the effectiveness of our proposed method on learning minimax classifiers on a variety of datasets. Code is available at github.com/MartinBertran/BiLevelGDRO.} }
Endnote
%0 Conference Paper %T Achieving Group Distributional Robustness and Minimax Group Fairness with Interpolating Classifiers %A Natalia L. Martinez %A Martin A. Bertran %A Guillermo Sapiro %B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2024 %E Sanjoy Dasgupta %E Stephan Mandt %E Yingzhen Li %F pmlr-v238-martinez24a %I PMLR %P 2629--2637 %U https://proceedings.mlr.press/v238/martinez24a.html %V 238 %X Group distributional robustness optimization methods (GDRO) learn models that guarantee performance across a broad set of demographics. GDRO is often framed as a minimax game where an adversary proposes data distributions under which the model performs poorly; importance weights are used to mimic the adversarial distribution on finite samples. Prior work has shown that applying GDRO with interpolating classifiers requires strong regularization to generalize to unseen data. Moreover, these classifiers are not responsive to importance weights in the asymptotic training regime. In this work we propose Bi-level GDRO, a provably convergent formulation that decouples the adversary's and model learner's objectives and improves generalization guarantees. To address non-responsiveness of importance weights, we combine Bi-level GDRO with a learner that optimizes a temperature-scaled loss that can provably trade off performance between demographics, even on interpolating classifiers. We experimentally demonstrate the effectiveness of our proposed method on learning minimax classifiers on a variety of datasets. Code is available at github.com/MartinBertran/BiLevelGDRO.
APA
Martinez, N.L., Bertran, M.A. & Sapiro, G. (2024). Achieving Group Distributional Robustness and Minimax Group Fairness with Interpolating Classifiers. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2629-2637. Available from https://proceedings.mlr.press/v238/martinez24a.html.