Optimal Fair Learning Robust to Adversarial Distribution Shift

Sushant Agarwal, Amit Deshpande, Rajmohan Rajaraman, Ravi Sundaram
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:513-530, 2025.

Abstract

Previous work in fair machine learning has characterised the Fair Bayes Optimal Classifier (BOC) on a given distribution for both deterministic and randomized classifiers. We study the robustness of the Fair BOC to adversarial noise in the data distribution. A result of Kearns & Li (1988) implies that the accuracy of the deterministic BOC without any fairness constraints is robust (Lipschitz) to malicious noise in the data distribution. We demonstrate that their robustness guarantee breaks down when we add fairness constraints. Hence, we consider the randomized Fair BOC, and our central result is that its accuracy is robust to malicious noise in the data distribution. Our robustness result applies to various fairness constraints, including Demographic Parity, Equal Opportunity, and Predictive Equality. Beyond robustness, we demonstrate that randomization leads to better accuracy and efficiency. We show that the randomized Fair BOC is nearly deterministic, giving randomized predictions on at most one data point, and hence enjoys the numerous benefits of randomness while using very little of it.
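
To make the structural claim concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): on a finite distribution with two groups and known eta(x) = P(Y = 1 | x), a Demographic-Parity-fair randomized classifier can be built by accepting points within each group in decreasing order of eta until a common acceptance rate is reached, with randomization confined to boundary points. The names `dp_fair_sketch` and `accept_at_rate`, and the inputs `mass`, `eta`, `group`, are all invented for this illustration; this coarse scan may randomize on one boundary point per group, whereas the paper proves the optimal fair classifier randomizes on at most one point overall.

```python
import numpy as np

def accept_at_rate(mass, eta, target_rate):
    """Accept points in decreasing order of eta = P(Y=1 | x) until the
    group's acceptance probability equals target_rate exactly; only the
    boundary point receives a fractional (randomized) acceptance."""
    p = np.zeros_like(eta, dtype=float)
    remaining = target_rate
    for i in np.argsort(-eta):
        if remaining <= 0:
            break
        take = min(mass[i], remaining)
        p[i] = take / mass[i]  # fractional only when mass[i] > remaining
        remaining -= take
    return p

def dp_fair_sketch(mass, eta, group):
    """Toy Demographic Parity: scan candidate common acceptance rates,
    equalize within-group rates at each candidate, keep the most accurate.
    mass sums to 1 over the finite domain; group is binary (0/1)."""
    best_p, best_acc = None, -np.inf
    for r in np.linspace(0.0, 1.0, 101):  # coarse grid over the common rate
        p = np.zeros_like(eta, dtype=float)
        for g in (0, 1):
            idx = group == g
            # normalize to within-group masses so the group's rate is r
            p[idx] = accept_at_rate(mass[idx] / mass[idx].sum(), eta[idx], r)
        # expected accuracy of the randomized classifier p
        acc = float(np.sum(mass * (p * eta + (1.0 - p) * (1.0 - eta))))
        if acc > best_acc:
            best_p, best_acc = p, acc
    return best_p, best_acc

if __name__ == "__main__":
    mass = np.array([0.25, 0.25, 0.25, 0.25])  # point masses, sum to 1
    eta = np.array([0.9, 0.3, 0.8, 0.6])       # P(Y=1 | x) per point
    group = np.array([0, 0, 1, 1])             # protected attribute
    p, acc = dp_fair_sketch(mass, eta, group)
    print("acceptance probabilities:", p, "expected accuracy:", acc)
```

On this toy input the best common rate accepts the highest-eta point in each group deterministically; in general, hitting the optimal common rate exactly is what forces a fractional acceptance on a boundary point, which is the source of the (very limited) randomness in the Fair BOC.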

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-agarwal25b,
  title     = {Optimal Fair Learning Robust to Adversarial Distribution Shift},
  author    = {Agarwal, Sushant and Deshpande, Amit and Rajaraman, Rajmohan and Sundaram, Ravi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {513--530},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/agarwal25b/agarwal25b.pdf},
  url       = {https://proceedings.mlr.press/v267/agarwal25b.html}
}
Endnote
%0 Conference Paper
%T Optimal Fair Learning Robust to Adversarial Distribution Shift
%A Sushant Agarwal
%A Amit Deshpande
%A Rajmohan Rajaraman
%A Ravi Sundaram
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-agarwal25b
%I PMLR
%P 513--530
%U https://proceedings.mlr.press/v267/agarwal25b.html
%V 267
APA
Agarwal, S., Deshpande, A., Rajaraman, R. & Sundaram, R. (2025). Optimal Fair Learning Robust to Adversarial Distribution Shift. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:513-530. Available from https://proceedings.mlr.press/v267/agarwal25b.html.