Standardized Interpretable Fairness Measures for Continuous Risk Scores

Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:3327-3346, 2024.

Abstract

We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss.
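The measures proposed in the paper build on the Wasserstein distance between the score distributions of two groups. As a rough illustration only (a generic empirical 1-D Wasserstein-1 computation, not the paper's standardized measure), the distance can be obtained by integrating the absolute difference of the two empirical CDFs:

```python
import bisect

def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between two 1-D samples,
    computed as the integral of |F_a(x) - F_b(x)| over x.
    Illustrative sketch; the paper standardizes this further."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    dist = 0.0
    for x0, x1 in zip(grid, grid[1:]):
        # Empirical CDF values of each sample on the interval [x0, x1)
        fa = bisect.bisect_right(a, x0) / len(a)
        fb = bisect.bisect_right(b, x0) / len(b)
        dist += abs(fa - fb) * (x1 - x0)
    return dist

# Example: scores for two hypothetical groups
group_a = [0.2, 0.4, 0.6, 0.8]
group_b = [0.3, 0.5, 0.7, 0.9]
print(wasserstein_1d(group_a, group_b))
```

For identically distributed groups the distance is zero; larger values indicate stronger group disparity in the score distributions.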

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-becker24a,
  title     = {Standardized Interpretable Fairness Measures for Continuous Risk Scores},
  author    = {Becker, Ann-Kristin and Dumitrasc, Oana and Broelemann, Klaus},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {3327--3346},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/becker24a/becker24a.pdf},
  url       = {https://proceedings.mlr.press/v235/becker24a.html},
  abstract  = {We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss.}
}
Endnote
%0 Conference Paper
%T Standardized Interpretable Fairness Measures for Continuous Risk Scores
%A Ann-Kristin Becker
%A Oana Dumitrasc
%A Klaus Broelemann
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-becker24a
%I PMLR
%P 3327--3346
%U https://proceedings.mlr.press/v235/becker24a.html
%V 235
%X We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss.
APA
Becker, A., Dumitrasc, O. & Broelemann, K. (2024). Standardized Interpretable Fairness Measures for Continuous Risk Scores. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:3327-3346. Available from https://proceedings.mlr.press/v235/becker24a.html.