Stochastic Fairness Interventions Are Arbitrary
Proceedings of the Fourth European Workshop on Algorithmic Fairness, PMLR 294:322-328, 2025.
Abstract
Bias mitigation techniques offer the opportunity to intervene on statistical models so as to reduce the risk that these will discriminate against certain groups. These techniques rely on learning a mapping from the sensitive data $S$ to some decision variable $\hat{Y}$, usually mediated by the non-sensitive covariates $X$. Some of the methods available in this space propose to learn a stochastic mapping, which has several benefits from a computational perspective: randomization makes certain mitigation objectives tractable to compute and widens the search space for “optimal” models. From the perspective of procedural fairness, however, a stochastic mapping may imply arbitrary decisions: two individuals with identical features can receive different outcomes, and the same individual can receive different outcomes across repeated draws. In this paper, we study and discuss the distribution of arbitrariness in popular randomized bias mitigation techniques currently available in standard fairness toolkits. We find that individuals belonging to different groups may face different risks of arbitrariness; furthermore, we observe distinct patterns of arbitrariness across randomized mitigation strategies, and we discuss possible causes for this general phenomenon.
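To make the notion of arbitrariness concrete, the sketch below shows how a randomized decision rule, in the style of the randomized post-processing methods found in fairness toolkits, can assign different outcomes to the same individual across repeated draws. It is a minimal illustration, not the paper's method: the thresholds, group labels, and group-dependent mixing probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized decision rule: scores far from the boundary are
# decided deterministically; scores in the boundary region receive a label
# drawn from a group-dependent Bernoulli distribution (the kind of
# randomization used to satisfy certain fairness constraints exactly).
def randomized_decision(score, group, rng,
                        mix_prob={"A": 0.7, "B": 0.3},  # hypothetical values
                        low=0.4, high=0.6):
    if score < low:
        return 0  # deterministic reject
    if score >= high:
        return 1  # deterministic accept
    # Boundary region: the outcome is a coin flip whose bias depends on group.
    return int(rng.random() < mix_prob[group])

# Same individual, same features: repeated draws can yield different decisions.
decisions = [randomized_decision(0.5, "A", rng) for _ in range(10)]
print(decisions)  # a mix of 0s and 1s for one and the same individual
```

Note that in this sketch the mixing probability, and hence the share of individuals exposed to randomized outcomes, differs by group; this is one mechanism by which arbitrariness can be distributed unevenly across groups, the phenomenon the abstract refers to.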