FACT: A Diagnostic for Group Fairness Trade-offs

Joon Sik Kim, Jiahao Chen, Ameet Talwalkar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5264-5274, 2020.

Abstract

Group fairness, a class of fairness notions that measure how groups of individuals are treated differently according to their protected attributes, has been shown to conflict with one another, often at a necessary cost to the model’s predictive performance. We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness. We observe that the majority of group fairness notions can be expressed via the fairness-confusion tensor, the confusion matrix split according to the protected attribute values. We frame several optimization problems that directly optimize both accuracy and fairness objectives over the elements of this tensor, yielding a general perspective for understanding multiple trade-offs, including group fairness incompatibilities. This framing also suggests an alternative post-processing method for designing fair classifiers. On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly for understanding the trade-off landscape between accuracy and fairness.
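The fairness-confusion tensor the abstract describes is simply the stack of per-group confusion matrices. As an illustration only (not the authors' code), a minimal NumPy sketch for the binary-label, binary-group case might look like this; the function names and the demographic-parity example metric are assumptions for the sketch:

```python
import numpy as np

def fairness_confusion_tensor(y_true, y_pred, group):
    """Stack one 2x2 confusion matrix per protected-group value.

    Returns (group_values, tensor) where tensor has shape
    (n_groups, 2, 2), with rows indexed by true label and
    columns by predicted label (binary 0/1 labels assumed).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    group_values = np.unique(group)
    tensor = np.zeros((len(group_values), 2, 2), dtype=int)
    for g_idx, g in enumerate(group_values):
        mask = group == g
        for t, p in zip(y_true[mask], y_pred[mask]):
            tensor[g_idx, t, p] += 1  # count (true, predicted) pair
    return group_values, tensor

def demographic_parity_gap(tensor):
    """Absolute gap in positive-prediction rates between two groups.

    Positive-prediction rate per group = (FP + TP) / group size,
    read directly off the tensor's predicted-positive column.
    """
    pos_rate = tensor[:, :, 1].sum(axis=1) / tensor.sum(axis=(1, 2))
    return abs(pos_rate[0] - pos_rate[1])

# Toy example: 8 individuals, two protected groups of 4.
groups, T = fairness_confusion_tensor(
    y_true=[1, 0, 1, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
)
print(T.shape)                    # (2, 2, 2)
print(demographic_parity_gap(T))  # 0.5
```

Group fairness notions such as demographic parity or equalized odds are then linear (or nearly linear) constraints on this tensor's entries, which is what makes the optimization framing in the paper possible.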

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-kim20a,
  title     = {{FACT}: A Diagnostic for Group Fairness Trade-offs},
  author    = {Kim, Joon Sik and Chen, Jiahao and Talwalkar, Ameet},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5264--5274},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/kim20a/kim20a.pdf},
  url       = {https://proceedings.mlr.press/v119/kim20a.html},
  abstract  = {Group fairness, a class of fairness notions that measure how different groups of individuals are treated differently according to their protected attributes, has been shown to conflict with one another, often with a necessary cost in loss of model’s predictive performance. We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness. We observe that the majority of group fairness notions can be expressed via the fairness-confusion tensor, which is the confusion matrix split according to the protected attribute values. We frame several optimization problems that directly optimize both accuracy and fairness objectives over the elements of this tensor, which yield a general perspective for understanding multiple trade-offs including group fairness incompatibilities. It also suggests an alternate post-processing method for designing fair classifiers. On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly on understanding the trade-off landscape between accuracy and fairness.}
}
Endnote
%0 Conference Paper
%T FACT: A Diagnostic for Group Fairness Trade-offs
%A Joon Sik Kim
%A Jiahao Chen
%A Ameet Talwalkar
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-kim20a
%I PMLR
%P 5264--5274
%U https://proceedings.mlr.press/v119/kim20a.html
%V 119
%X Group fairness, a class of fairness notions that measure how different groups of individuals are treated differently according to their protected attributes, has been shown to conflict with one another, often with a necessary cost in loss of model’s predictive performance. We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness. We observe that the majority of group fairness notions can be expressed via the fairness-confusion tensor, which is the confusion matrix split according to the protected attribute values. We frame several optimization problems that directly optimize both accuracy and fairness objectives over the elements of this tensor, which yield a general perspective for understanding multiple trade-offs including group fairness incompatibilities. It also suggests an alternate post-processing method for designing fair classifiers. On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly on understanding the trade-off landscape between accuracy and fairness.
APA
Kim, J.S., Chen, J. & Talwalkar, A. (2020). FACT: A Diagnostic for Group Fairness Trade-offs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5264-5274. Available from https://proceedings.mlr.press/v119/kim20a.html.