Standardized Interpretable Fairness Measures for Continuous Risk Scores
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:3327-3346, 2024.
Abstract
We propose a standardized version of fairness measures for continuous scores, with a clear interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities, as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures: they are more explicit and can quantify significant biases that ROC-based measures miss.
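As a rough illustration of the core quantity involved (not the paper's exact standardized measure), the following sketch computes the 1-Wasserstein distance between the empirical score distributions of two groups using `scipy.stats.wasserstein_distance`. The group names and score distributions are hypothetical; for scores bounded in [0, 1], the raw distance is itself bounded in [0, 1], which is what makes a standardized, interpretable reading possible.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical continuous risk scores in [0, 1] for two groups
# (stand-ins for model outputs split by a protected attribute).
scores_group_a = rng.beta(2.0, 5.0, size=1000)
scores_group_b = rng.beta(2.5, 4.5, size=1000)

# 1-Wasserstein distance between the empirical score distributions.
# It is 0 iff the distributions coincide; for scores in [0, 1] it is
# at most 1, so its magnitude can be read as a disparity strength
# and compared across models, datasets, or time points.
bias = wasserstein_distance(scores_group_a, scores_group_b)
print(f"Wasserstein disparity between groups: {bias:.4f}")
```

Because the distance is computed directly on score distributions rather than on derived ROC curves, it remains nonzero for distributional disparities that leave ROC-based measures unchanged.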