Fairness-aware class imbalanced learning on multiple subgroups

Davoud Ataee Tarzanagh, Bojian Hou, Boning Tong, Qi Long, Li Shen
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2123-2133, 2023.

Abstract

We present a novel Bayesian-based optimization framework that addresses the challenge of generalization in overparameterized models when dealing with imbalanced subgroups and limited samples per subgroup. Our proposed tri-level optimization framework utilizes local predictors, which are trained on a small amount of data, as well as a fair and class-balanced predictor at the middle and lower levels. To effectively overcome saddle points for minority classes, our lower-level formulation incorporates sharpness-aware minimization. Meanwhile, at the upper level, the framework dynamically adjusts the loss function based on validation loss, ensuring a close alignment between the global predictor and local predictors. Theoretical analysis demonstrates the framework's ability to enhance classification and fairness generalization, potentially resulting in improvements in the generalization bound. Empirical results validate the superior performance of our tri-level framework compared to existing state-of-the-art approaches. The source code can be found at https://github.com/PennShenLab/FACIMS.
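The abstract's lower-level formulation relies on sharpness-aware minimization (SAM). As a rough illustration of that ingredient only, and not the authors' released FACIMS code, the sketch below shows one generic SAM update in PyTorch; the names model, loss_fn, base_opt, and rho are illustrative placeholders.

import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One generic sharpness-aware minimization (SAM) step (illustrative sketch)."""
    # First pass: gradients at the current weights.
    loss_fn(model(x), y).backward()

    # Perturb weights in the ascent direction, scaled to norm rho.
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(p=2) for p in model.parameters() if p.grad is not None]),
        p=2,
    )
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradients at the perturbed weights define the update.
    model.zero_grad()
    loss_fn(model(x), y).backward()

    # Restore the original weights, then apply the update computed at the perturbed point.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()

In the tri-level scheme described above, a step of this kind would sit at the lowest level, beneath the fair and class-balanced middle level and the validation-driven upper level.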

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-tarzanagh23a,
  title     = {Fairness-aware class imbalanced learning on multiple subgroups},
  author    = {Tarzanagh, Davoud Ataee and Hou, Bojian and Tong, Boning and Long, Qi and Shen, Li},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2123--2133},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/tarzanagh23a/tarzanagh23a.pdf},
  url       = {https://proceedings.mlr.press/v216/tarzanagh23a.html},
  abstract  = {We present a novel Bayesian-based optimization framework that addresses the challenge of generalization in overparameterized models when dealing with imbalanced subgroups and limited samples per subgroup. Our proposed tri-level optimization framework utilizes local predictors, which are trained on a small amount of data, as well as a fair and class-balanced predictor at the middle and lower levels. To effectively overcome saddle points for minority classes, our lower-level formulation incorporates sharpness-aware minimization. Meanwhile, at the upper level, the framework dynamically adjusts the loss function based on validation loss, ensuring a close alignment between the global predictor and local predictors. Theoretical analysis demonstrates the framework's ability to enhance classification and fairness generalization, potentially resulting in improvements in the generalization bound. Empirical results validate the superior performance of our tri-level framework compared to existing state-of-the-art approaches. The source code can be found at \url{https://github.com/PennShenLab/FACIMS}.}
}
Endnote
%0 Conference Paper
%T Fairness-aware class imbalanced learning on multiple subgroups
%A Davoud Ataee Tarzanagh
%A Bojian Hou
%A Boning Tong
%A Qi Long
%A Li Shen
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-tarzanagh23a
%I PMLR
%P 2123--2133
%U https://proceedings.mlr.press/v216/tarzanagh23a.html
%V 216
%X We present a novel Bayesian-based optimization framework that addresses the challenge of generalization in overparameterized models when dealing with imbalanced subgroups and limited samples per subgroup. Our proposed tri-level optimization framework utilizes local predictors, which are trained on a small amount of data, as well as a fair and class-balanced predictor at the middle and lower levels. To effectively overcome saddle points for minority classes, our lower-level formulation incorporates sharpness-aware minimization. Meanwhile, at the upper level, the framework dynamically adjusts the loss function based on validation loss, ensuring a close alignment between the global predictor and local predictors. Theoretical analysis demonstrates the framework's ability to enhance classification and fairness generalization, potentially resulting in improvements in the generalization bound. Empirical results validate the superior performance of our tri-level framework compared to existing state-of-the-art approaches. The source code can be found at https://github.com/PennShenLab/FACIMS.
APA
Tarzanagh, D.A., Hou, B., Tong, B., Long, Q., & Shen, L. (2023). Fairness-aware class imbalanced learning on multiple subgroups. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:2123-2133. Available from https://proceedings.mlr.press/v216/tarzanagh23a.html.
