ABCFair: an Adaptable Benchmark approach for Comparing Fairness methods

MaryBeth Defrance, Maarten Buyl, Tijl De Bie
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:383-388, 2025.

Abstract

Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, these subtle differences make it highly complicated to benchmark fairness methods, as their performance can strongly depend on exactly how the bias mitigation problem was originally framed. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. In this extended abstract, we provide a summary of the results of applying ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.
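As a minimal, generic illustration (not the ABCFair API and not code from the paper), the sketch below shows how one fairness notion that such a benchmark can vary over, demographic parity, might be measured alongside accuracy for a binary classifier. The helper functions and toy arrays are hypothetical.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    # Absolute gap in positive prediction rates across sensitive groups.
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def accuracy(y_pred, y_true):
    # Fraction of predictions that match the (possibly biased) labels.
    return (y_pred == y_true).mean()

# Toy, purely illustrative data: binary predictions, labels, and one binary sensitive feature.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("accuracy:", accuracy(y_pred, y_true))                                          # 0.75
print("demographic parity gap:", demographic_parity_difference(y_pred, sensitive))    # 0.5

A benchmark that adapts to a use case's desiderata would swap in other fairness notions (e.g. equalized odds) and apply the chosen pre-, in-, or postprocessing intervention before computing such metrics.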

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-defrance25a,
  title     = {ABCFair: an Adaptable Benchmark approach for Comparing Fairness methods},
  author    = {Defrance, MaryBeth and Buyl, Maarten and De Bie, Tijl},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {383--388},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and CorrĂȘa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/defrance25a/defrance25a.pdf},
  url       = {https://proceedings.mlr.press/v294/defrance25a.html},
  abstract  = {Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, these subtle differences make it highly complicated to benchmark fairness methods, as their performance can strongly depend on exactly how the bias mitigation problem was originally framed. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. In this extended abstract, we provide a summary of the results of applying ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.}
}
Endnote
%0 Conference Paper
%T ABCFair: an Adaptable Benchmark approach for Comparing Fairness methods
%A MaryBeth Defrance
%A Maarten Buyl
%A Tijl De Bie
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria CorrĂȘa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-defrance25a
%I PMLR
%P 383--388
%U https://proceedings.mlr.press/v294/defrance25a.html
%V 294
%X Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, these subtle differences make it highly complicated to benchmark fairness methods, as their performance can strongly depend on exactly how the bias mitigation problem was originally framed. Hence, we introduce ABCFair, a benchmark approach which allows adapting to the desiderata of the real-world problem setting, enabling proper comparability between methods for any use case. In this extended abstract, we provide a summary of the results of applying ABCFair to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.
APA
Defrance, M., Buyl, M. & De Bie, T. (2025). ABCFair: an Adaptable Benchmark approach for Comparing Fairness methods. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:383-388. Available from https://proceedings.mlr.press/v294/defrance25a.html.
