FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation

Yuheng Lai, Leying Guan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32228-32245, 2025.

Abstract

Equalized odds, an important notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm’s prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a flexible and efficient scheme to promote equalized odds under fairness conditions described by complex and multi-dimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lai25b,
  title     = {{F}air{ICP}: Encouraging Equalized Odds via Inverse Conditional Permutation},
  author    = {Lai, Yuheng and Guan, Leying},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {32228--32245},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lai25b/lai25b.pdf},
  url       = {https://proceedings.mlr.press/v267/lai25b.html},
  abstract  = {Equalized odds, an important notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm’s prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a flexible and efficient scheme to promote equalized odds under fairness conditions described by complex and multi-dimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.}
}
Endnote
%0 Conference Paper
%T FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation
%A Yuheng Lai
%A Leying Guan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lai25b
%I PMLR
%P 32228--32245
%U https://proceedings.mlr.press/v267/lai25b.html
%V 267
%X Equalized odds, an important notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm’s prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes under-addressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a flexible and efficient scheme to promote equalized odds under fairness conditions described by complex and multi-dimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.
APA
Lai, Y. & Guan, L. (2025). FairICP: Encouraging Equalized Odds via Inverse Conditional Permutation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:32228-32245. Available from https://proceedings.mlr.press/v267/lai25b.html.