FairRR: Pre-Processing for Group Fairness through Randomized Response

Joshua John Ward, Xianli Zeng, Guang Cheng
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3826-3834, 2024.

Abstract

The increasing usage of machine learning models in consequential decision-making processes has spurred research into the fairness of these systems. While significant work has been done to study group fairness in the in-processing and post-processing setting, there has been little that theoretically connects these results to the pre-processing domain. This paper extends recent fair statistical learning results and proposes that achieving group fairness in downstream models can be formulated as finding the optimal design matrix in which to modify a response variable in a Randomized Response framework. We show that measures of group fairness can be directly controlled for with optimal model utility, proposing a pre-processing algorithm called FairRR that yields excellent downstream model utility and fairness.
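The abstract's core idea, modifying the response variable via randomized response with group-dependent probabilities before training, can be illustrated with a minimal sketch. This is not the paper's FairRR algorithm (which derives optimal design-matrix probabilities); the flip probabilities here are arbitrary illustrative values, and `randomized_response` is a hypothetical helper name.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(y, s, flip_prob={0: 0.10, 1: 0.25}):
    """Flip each binary label y[i] with a probability that depends on
    the sensitive-attribute group s[i] (illustrative values only)."""
    y = np.asarray(y)
    s = np.asarray(s)
    # Draw one Bernoulli flip decision per example, group-dependent
    flips = np.array([rng.random() < flip_prob[g] for g in s])
    return np.where(flips, 1 - y, y)

# Toy data: group 1 has a higher base rate of positive labels
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
s = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_fair = randomized_response(y, s)  # train downstream model on (X, y_fair)
```

In FairRR, the flip probabilities are not fixed constants but are chosen as the solution to an optimization that trades off a target group-fairness measure against model utility; this sketch only shows where such probabilities would plug in.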

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-john-ward24a,
  title     = {{F}air{RR}: Pre-Processing for Group Fairness through Randomized Response},
  author    = {John Ward, Joshua and Zeng, Xianli and Cheng, Guang},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3826--3834},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/john-ward24a/john-ward24a.pdf},
  url       = {https://proceedings.mlr.press/v238/john-ward24a.html},
  abstract  = {The increasing usage of machine learning models in consequential decision-making processes has spurred research into the fairness of these systems. While significant work has been done to study group fairness in the in-processing and post-processing setting, there has been little that theoretically connects these results to the pre-processing domain. This paper extends recent fair statistical learning results and proposes that achieving group fairness in downstream models can be formulated as finding the optimal design matrix in which to modify a response variable in a Randomized Response framework. We show that measures of group fairness can be directly controlled for with optimal model utility, proposing a pre-processing algorithm called FairRR that yields excellent downstream model utility and fairness.}
}
Endnote
%0 Conference Paper
%T FairRR: Pre-Processing for Group Fairness through Randomized Response
%A Joshua John Ward
%A Xianli Zeng
%A Guang Cheng
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-john-ward24a
%I PMLR
%P 3826--3834
%U https://proceedings.mlr.press/v238/john-ward24a.html
%V 238
%X The increasing usage of machine learning models in consequential decision-making processes has spurred research into the fairness of these systems. While significant work has been done to study group fairness in the in-processing and post-processing setting, there has been little that theoretically connects these results to the pre-processing domain. This paper extends recent fair statistical learning results and proposes that achieving group fairness in downstream models can be formulated as finding the optimal design matrix in which to modify a response variable in a Randomized Response framework. We show that measures of group fairness can be directly controlled for with optimal model utility, proposing a pre-processing algorithm called FairRR that yields excellent downstream model utility and fairness.
APA
John Ward, J., Zeng, X. & Cheng, G. (2024). FairRR: Pre-Processing for Group Fairness through Randomized Response. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3826-3834. Available from https://proceedings.mlr.press/v238/john-ward24a.html.