FRAPPÉ: A Group Fairness Framework for Post-Processing Everything

Alexandru Tifrea, Preethi Lahoti, Ben Packer, Yoni Halpern, Ahmad Beirami, Flavien Prost
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:48321-48343, 2024.

Abstract

Despite achieving promising fairness-error trade-offs, in-processing mitigation techniques for group fairness cannot be employed in numerous practical applications with limited computation resources or no access to the training pipeline of the prediction model. In these situations, post-processing is a viable alternative. However, current methods are tailored to specific problem settings and fairness definitions and hence, are not as broadly applicable as in-processing. In this work, we propose a framework that turns any regularized in-processing method into a post-processing approach. This procedure prescribes a way to obtain post-processing techniques for a much broader range of problem settings than the prior post-processing literature. We show theoretically and through extensive experiments that our framework preserves the good fairness-error trade-offs achieved with in-processing and can improve over the effectiveness of prior post-processing methods. Finally, we demonstrate several advantages of a modular mitigation strategy that disentangles the training of the prediction model from the fairness mitigation, including better performance on tasks with partial group labels.
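The core idea described in the abstract is to reuse a fairness-regularized training objective, but to optimize it only over a lightweight post-hoc module applied on top of a frozen pre-trained model. The sketch below illustrates that idea under simplifying assumptions: the post-processing module is an additive correction to the base model's scores, and the fairness regularizer is a simple demographic-parity gap penalty. All names in the snippet (frozen_base_scores, AdditiveCorrection, dp_gap_penalty) are illustrative placeholders and not the paper's implementation; the paper's framework is more general than this example.

    # Hedged sketch: turn a regularized in-processing objective into a
    # post-processing step by freezing the base model's scores and training
    # only a small additive correction with the fairness-regularized loss.
    import torch
    import torch.nn as nn

    def frozen_base_scores(x):
        # Stand-in for any black-box predictor: we only need its scores,
        # not access to its training pipeline. Here: a fixed linear scorer.
        torch.manual_seed(0)
        w = torch.randn(x.shape[1], 1)
        return x @ w

    class AdditiveCorrection(nn.Module):
        # Small post-hoc module; only these parameters are trained.
        def __init__(self, dim):
            super().__init__()
            self.delta = nn.Linear(dim, 1)

        def forward(self, x, base_scores):
            return base_scores + self.delta(x)

    def dp_gap_penalty(scores, groups):
        # Placeholder fairness regularizer: squared gap between the groups'
        # mean predicted probabilities (demographic-parity style penalty).
        p = torch.sigmoid(scores).squeeze(-1)
        return (p[groups == 0].mean() - p[groups == 1].mean()) ** 2

    # Toy data: features, binary labels, binary group membership.
    n, d = 512, 10
    x = torch.randn(n, d)
    y = (x[:, 0] + 0.5 * torch.randn(n) > 0).float()
    groups = (x[:, 1] > 0).long()

    base = frozen_base_scores(x).detach()   # base model stays untouched
    post = AdditiveCorrection(d)
    opt = torch.optim.Adam(post.parameters(), lr=1e-2)
    lam = 5.0                               # fairness-error trade-off knob

    for _ in range(200):
        opt.zero_grad()
        scores = post(x, base)
        loss = nn.functional.binary_cross_entropy_with_logits(scores.squeeze(-1), y)
        loss = loss + lam * dp_gap_penalty(scores, groups)
        loss.backward()
        opt.step()

Because only the correction term is trained, the base model's training pipeline is never touched, which is what makes this style of mitigation usable in the limited-access settings the abstract describes.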

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-tifrea24a,
  title = {{FRAPP}É: A Group Fairness Framework for Post-Processing Everything},
  author = {Tifrea, Alexandru and Lahoti, Preethi and Packer, Ben and Halpern, Yoni and Beirami, Ahmad and Prost, Flavien},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {48321--48343},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/tifrea24a/tifrea24a.pdf},
  url = {https://proceedings.mlr.press/v235/tifrea24a.html},
  abstract = {Despite achieving promising fairness-error trade-offs, in-processing mitigation techniques for group fairness cannot be employed in numerous practical applications with limited computation resources or no access to the training pipeline of the prediction model. In these situations, post-processing is a viable alternative. However, current methods are tailored to specific problem settings and fairness definitions and hence, are not as broadly applicable as in-processing. In this work, we propose a framework that turns any regularized in-processing method into a post-processing approach. This procedure prescribes a way to obtain post-processing techniques for a much broader range of problem settings than the prior post-processing literature. We show theoretically and through extensive experiments that our framework preserves the good fairness-error trade-offs achieved with in-processing and can improve over the effectiveness of prior post-processing methods. Finally, we demonstrate several advantages of a modular mitigation strategy that disentangles the training of the prediction model from the fairness mitigation, including better performance on tasks with partial group labels.}
}
Endnote
%0 Conference Paper
%T FRAPPÉ: A Group Fairness Framework for Post-Processing Everything
%A Alexandru Tifrea
%A Preethi Lahoti
%A Ben Packer
%A Yoni Halpern
%A Ahmad Beirami
%A Flavien Prost
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-tifrea24a
%I PMLR
%P 48321--48343
%U https://proceedings.mlr.press/v235/tifrea24a.html
%V 235
%X Despite achieving promising fairness-error trade-offs, in-processing mitigation techniques for group fairness cannot be employed in numerous practical applications with limited computation resources or no access to the training pipeline of the prediction model. In these situations, post-processing is a viable alternative. However, current methods are tailored to specific problem settings and fairness definitions and hence, are not as broadly applicable as in-processing. In this work, we propose a framework that turns any regularized in-processing method into a post-processing approach. This procedure prescribes a way to obtain post-processing techniques for a much broader range of problem settings than the prior post-processing literature. We show theoretically and through extensive experiments that our framework preserves the good fairness-error trade-offs achieved with in-processing and can improve over the effectiveness of prior post-processing methods. Finally, we demonstrate several advantages of a modular mitigation strategy that disentangles the training of the prediction model from the fairness mitigation, including better performance on tasks with partial group labels.
APA
Tifrea, A., Lahoti, P., Packer, B., Halpern, Y., Beirami, A. & Prost, F. (2024). FRAPPÉ: A Group Fairness Framework for Post-Processing Everything. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:48321-48343. Available from https://proceedings.mlr.press/v235/tifrea24a.html.
