Uncertainty Matters: Stable Conclusions under Unstable Assessment of Fairness Results

Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1198-1206, 2024.

Abstract

Recent studies highlight the effectiveness of Bayesian methods in assessing algorithm performance, particularly in fairness and bias evaluation. We present Uncertainty Matters, a multi-objective, uncertainty-aware algorithmic comparison framework. In fairness-focused scenarios, it models sensitive group confusion matrices using Bayesian updates and facilitates joint comparison of performance (e.g., accuracy) and fairness metrics (e.g., true positive rate parity). Our approach works seamlessly with common evaluation methods like K-fold cross-validation, effectively addressing dependencies among the K posterior metric distributions. The integration of correlated information is carried out through a procedure tailored to the classifier's complexity. Experiments demonstrate that the insights derived from algorithmic comparisons employing the Uncertainty Matters approach are more informative, reliable, and less influenced by particular data partitions. Code for the paper is publicly available at https://github.com/abarrainkua/UncertaintyMatters.
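For intuition, the sketch below (Python with NumPy) illustrates the kind of Bayesian treatment the abstract describes: an independent Dirichlet posterior over each sensitive group's confusion-matrix cells, with posterior samples propagated through accuracy and the true-positive-rate gap. The counts, function name, and population-weighted pooling are illustrative assumptions, not the paper's actual procedure; in particular, the handling of dependencies across cross-validation folds, which is central to the paper, is omitted here. See the linked repository for the real implementation.

import numpy as np

rng = np.random.default_rng(0)

def posterior_metric_samples(counts, n_samples=10_000, prior=1.0):
    # counts: dict mapping group -> [TP, FP, FN, TN] observed on held-out data.
    # (Hypothetical helper for illustration only.)
    acc, tpr = {}, {}
    for g, c in counts.items():
        # Dirichlet posterior over the joint cell probabilities (TP, FP, FN, TN)
        theta = rng.dirichlet(np.asarray(c, dtype=float) + prior, size=n_samples)
        tp, fp, fn, tn = theta.T
        acc[g] = tp + tn               # P(prediction correct) within group g
        tpr[g] = tp / (tp + fn)        # P(Yhat = 1 | Y = 1) within group g
    groups = list(counts)
    sizes = np.array([sum(counts[g]) for g in groups], dtype=float)
    weights = sizes / sizes.sum()      # pool group accuracies by group size
    overall_acc = sum(w * acc[g] for w, g in zip(weights, groups))
    tpr_gap = np.abs(tpr[groups[0]] - tpr[groups[1]])  # TPR-parity gap (two groups)
    return overall_acc, tpr_gap

# Hypothetical confusion-matrix counts for two sensitive groups
counts = {"group_a": [80, 10, 15, 95], "group_b": [40, 20, 35, 105]}
acc_s, gap_s = posterior_metric_samples(counts)
print(f"P(accuracy > 0.70) = {np.mean(acc_s > 0.70):.3f}")
print(f"P(TPR gap < 0.10)  = {np.mean(gap_s < 0.10):.3f}")

Because performance and fairness are sampled from the same posterior, a joint criterion such as P(accuracy above a threshold and TPR gap below a tolerance) can be read directly off the samples, which is what makes the multi-objective comparison natural in this setting.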

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-barrainkua24a,
  title = {Uncertainty Matters: Stable Conclusions under Unstable Assessment of Fairness Results},
  author = {Barrainkua, Ainhize and Gordaliza, Paula and Lozano, Jose A. and Quadrianto, Novi},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages = {1198--1206},
  year = {2024},
  editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume = {238},
  series = {Proceedings of Machine Learning Research},
  month = {02--04 May},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v238/barrainkua24a/barrainkua24a.pdf},
  url = {https://proceedings.mlr.press/v238/barrainkua24a.html},
  abstract = {Recent studies highlight the effectiveness of Bayesian methods in assessing algorithm performance, particularly in fairness and bias evaluation. We present Uncertainty Matters, a multi-objective, uncertainty-aware algorithmic comparison framework. In fairness-focused scenarios, it models sensitive group confusion matrices using Bayesian updates and facilitates joint comparison of performance (e.g., accuracy) and fairness metrics (e.g., true positive rate parity). Our approach works seamlessly with common evaluation methods like K-fold cross-validation, effectively addressing dependencies among the K posterior metric distributions. The integration of correlated information is carried out through a procedure tailored to the classifier's complexity. Experiments demonstrate that the insights derived from algorithmic comparisons employing the Uncertainty Matters approach are more informative, reliable, and less influenced by particular data partitions. Code for the paper is publicly available at \url{https://github.com/abarrainkua/UncertaintyMatters}.}
}
Endnote
%0 Conference Paper
%T Uncertainty Matters: Stable Conclusions under Unstable Assessment of Fairness Results
%A Ainhize Barrainkua
%A Paula Gordaliza
%A Jose A. Lozano
%A Novi Quadrianto
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-barrainkua24a
%I PMLR
%P 1198--1206
%U https://proceedings.mlr.press/v238/barrainkua24a.html
%V 238
%X Recent studies highlight the effectiveness of Bayesian methods in assessing algorithm performance, particularly in fairness and bias evaluation. We present Uncertainty Matters, a multi-objective, uncertainty-aware algorithmic comparison framework. In fairness-focused scenarios, it models sensitive group confusion matrices using Bayesian updates and facilitates joint comparison of performance (e.g., accuracy) and fairness metrics (e.g., true positive rate parity). Our approach works seamlessly with common evaluation methods like K-fold cross-validation, effectively addressing dependencies among the K posterior metric distributions. The integration of correlated information is carried out through a procedure tailored to the classifier's complexity. Experiments demonstrate that the insights derived from algorithmic comparisons employing the Uncertainty Matters approach are more informative, reliable, and less influenced by particular data partitions. Code for the paper is publicly available at https://github.com/abarrainkua/UncertaintyMatters.
APA
Barrainkua, A., Gordaliza, P., Lozano, J.A. & Quadrianto, N. (2024). Uncertainty Matters: Stable Conclusions under Unstable Assessment of Fairness Results. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1198-1206. Available from https://proceedings.mlr.press/v238/barrainkua24a.html.