Stochastic Fairness Interventions Are Arbitrary

Mattia Cerrato, Marius Köppel, Kiara Stempel, Philipp Wolf, Stefan Kramer
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:322-328, 2025.

Abstract

Bias mitigation techniques offer the opportunity to intervene on statistical models so as to reduce the risk that these models will discriminate against certain groups. These techniques rely on learning a mapping from the sensitive data $S$ to some decision variable $\hat{Y}$, usually mediated by the non-sensitive covariates $X$. Some of the methods available in this space propose to learn a stochastic mapping, which has several theoretical benefits from a computational perspective: namely, randomization makes it possible to compute certain mitigation objectives and widens the search space for “optimal” models. From the perspective of procedural fairness, however, stochastic mappings may imply arbitrary decisions. In this paper, we study and discuss the distribution of arbitrariness in popular randomized bias mitigation techniques currently available in standard fairness toolkits. We find that individuals belonging to different groups may face different risks of arbitrariness; furthermore, we observe different patterns of arbitrariness for different randomized mitigation strategies and discuss possible causes for this general phenomenon.
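
To make the notion of arbitrariness concrete, here is a minimal sketch assuming a hypothetical randomized decision rule that draws $\hat{Y} \sim \mathrm{Bernoulli}(p(x))$ at decision time. The toy score distributions, group sizes, and variable names are illustrative assumptions, not the authors' setup; the sketch only shows how the probability that two independent draws disagree for the same individual can differ systematically across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: n individuals, a binary sensitive attribute S,
# and a model score p(x_i) = P(Y_hat = 1 | x_i) per individual.
n, n_draws = 1000, 200
group = rng.integers(0, 2, size=n)
# Hypothetical score distributions (an assumption for illustration):
# group 0's scores concentrate around 0.5, group 1's near 0 and 1.
scores = np.where(group == 0,
                  rng.beta(5, 5, size=n),
                  rng.beta(0.3, 0.3, size=n))

# A randomized decision rule: Y_hat ~ Bernoulli(p(x)), drawn n_draws
# times for the *same* individuals.
decisions = rng.random((n_draws, n)) < scores  # shape (n_draws, n)

# Arbitrariness per individual: the probability that two independent
# draws of the decision disagree, i.e. 2 * p * (1 - p), estimated
# from the empirical acceptance frequency.
p_hat = decisions.mean(axis=0)
disagreement = 2 * p_hat * (1 - p_hat)

for g in (0, 1):
    print(f"group {g}: mean disagreement across draws = "
          f"{disagreement[group == g].mean():.3f}")
```

Since the disagreement rate $2p(1-p)$ peaks at $p = 0.5$, a group whose scores concentrate around 0.5 bears more arbitrariness than one whose scores sit near 0 or 1, even when both groups receive the same acceptance rate on average.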

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-cerrato25a,
  title     = {Stochastic Fairness Interventions Are Arbitrary},
  author    = {Cerrato, Mattia and K\"oppel, Marius and Stempel, Kiara and Wolf, Philipp and Kramer, Stefan},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {322--328},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and Corrêa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/cerrato25a/cerrato25a.pdf},
  url       = {https://proceedings.mlr.press/v294/cerrato25a.html},
  abstract  = {Bias mitigation techniques offer the opportunity to intervene on statistical models so as to reduce the risk that these models will discriminate against certain groups. These techniques rely on learning a mapping from the sensitive data $S$ to some decision variable $\hat{Y}$, usually mediated by the non-sensitive covariates $X$. Some of the methods available in this space propose to learn a stochastic mapping, which has several theoretical benefits from a computational perspective: namely, randomization makes it possible to compute certain mitigation objectives and widens the search space for “optimal” models. From the perspective of procedural fairness, however, stochastic mappings may imply arbitrary decisions. In this paper, we study and discuss the distribution of arbitrariness in popular randomized bias mitigation techniques currently available in standard fairness toolkits. We find that individuals belonging to different groups may face different risks of arbitrariness; furthermore, we observe different patterns of arbitrariness for different randomized mitigation strategies and discuss possible causes for this general phenomenon.}
}
Endnote
%0 Conference Paper
%T Stochastic Fairness Interventions Are Arbitrary
%A Mattia Cerrato
%A Marius Köppel
%A Kiara Stempel
%A Philipp Wolf
%A Stefan Kramer
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria Corrêa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-cerrato25a
%I PMLR
%P 322--328
%U https://proceedings.mlr.press/v294/cerrato25a.html
%V 294
%X Bias mitigation techniques offer the opportunity to intervene on statistical models so as to reduce the risk that these models will discriminate against certain groups. These techniques rely on learning a mapping from the sensitive data $S$ to some decision variable $\hat{Y}$, usually mediated by the non-sensitive covariates $X$. Some of the methods available in this space propose to learn a stochastic mapping, which has several theoretical benefits from a computational perspective: namely, randomization makes it possible to compute certain mitigation objectives and widens the search space for “optimal” models. From the perspective of procedural fairness, however, stochastic mappings may imply arbitrary decisions. In this paper, we study and discuss the distribution of arbitrariness in popular randomized bias mitigation techniques currently available in standard fairness toolkits. We find that individuals belonging to different groups may face different risks of arbitrariness; furthermore, we observe different patterns of arbitrariness for different randomized mitigation strategies and discuss possible causes for this general phenomenon.
APA
Cerrato, M., Köppel, M., Stempel, K., Wolf, P. & Kramer, S. (2025). Stochastic Fairness Interventions Are Arbitrary. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:322-328. Available from https://proceedings.mlr.press/v294/cerrato25a.html.