Testing Group Fairness via Optimal Transport Projections

Nian Si, Karthyek Murthy, Jose Blanchet, Viet Anh Nguyen
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9649-9659, 2021.

Abstract

We have developed a statistical testing framework to detect whether a given machine learning classifier fails to satisfy a wide range of group fairness notions. Our test is a flexible, interpretable, and statistically rigorous tool for auditing whether exhibited biases are intrinsic to the algorithm or simply due to randomness in the data. The statistical challenges, which arise from the multiple impact criteria that define group fairness and are discontinuous in the model parameters, are conveniently tackled by projecting the empirical measure onto the set of group-fair probability models using optimal transport. The resulting test statistic is efficiently computed via linear programming, and its asymptotic distribution is explicitly obtained. The proposed framework can also be used to test composite fairness hypotheses and fairness with multiple sensitive attributes. The optimal transport testing formulation improves interpretability by characterizing the minimal covariate perturbations that eliminate the bias observed in the audit.
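
As an illustration of the projection idea, the minimal sketch below is not the authors' implementation; it assumes statistical parity as the fairness criterion, a binary sensitive attribute A, a fixed binary classifier h, and a transport plan that is not allowed to alter the sensitive attribute. Under these assumptions the parity constraint is linear in the target marginal, so the projection of the empirical law of (A, h(X)) onto the fair set reduces to a small linear program solved here with scipy.optimize.linprog; the function parity_projection_cost and the synthetic data are hypothetical.

import numpy as np
from scipy.optimize import linprog

def parity_projection_cost(a, yhat):
    """Minimal transport cost from the empirical law of (A, h(X)) to the set of
    distributions with equal acceptance rates across groups, moving probability
    mass only within each group (the sensitive attribute is never transported)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]            # support of (A, h(X))
    p = np.array([np.mean((a == s[0]) & (yhat == s[1])) for s in states])
    pa0, pa1 = p[0] + p[1], p[2] + p[3]                  # group masses (fixed)

    # Coupling variables pi[i, j] (flattened, index 4*i + j); unit cost for
    # flipping the classifier output, transport across groups is forbidden.
    cost = np.array([abs(si[1] - sj[1]) for si in states for sj in states], float)
    bounds = [(0, 0) if si[0] != sj[0] else (0, None)
              for si in states for sj in states]

    A_eq = np.zeros((5, 16))
    b_eq = np.zeros(5)
    for i in range(4):                                   # source marginals:
        A_eq[i, 4 * i:4 * i + 4] = 1.0                   #   sum_j pi[i, j] = p[i]
        b_eq[i] = p[i]
    for i in range(4):                                   # parity of the target
        A_eq[4, 4 * i + 1] += pa1                        # marginal q = pi^T 1:
        A_eq[4, 4 * i + 3] -= pa0                        #   q(0,1)*pa1 = q(1,1)*pa0

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun                                       # projection cost

# Hypothetical usage on synthetic data biased in favour of group A = 1; a large
# projection cost (rescaled with the sample size as in the paper's limit theorem)
# is evidence against the fairness null hypothesis.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=2000)
yhat = (rng.random(2000) < 0.4 + 0.2 * a).astype(int)
print(parity_projection_cost(a, yhat))

Restricting transport to within each group is what keeps the parity constraint linear in the target marginal; if the sensitive attribute could also be transported, the conditional-probability constraint would become bilinear and the projection would no longer be a plain linear program.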

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-si21a,
  title     = {Testing Group Fairness via Optimal Transport Projections},
  author    = {Si, Nian and Murthy, Karthyek and Blanchet, Jose and Nguyen, Viet Anh},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9649--9659},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/si21a/si21a.pdf},
  url       = {https://proceedings.mlr.press/v139/si21a.html}
}
Endnote
%0 Conference Paper
%T Testing Group Fairness via Optimal Transport Projections
%A Nian Si
%A Karthyek Murthy
%A Jose Blanchet
%A Viet Anh Nguyen
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-si21a
%I PMLR
%P 9649--9659
%U https://proceedings.mlr.press/v139/si21a.html
%V 139
APA
Si, N., Murthy, K., Blanchet, J. & Nguyen, V. A. (2021). Testing Group Fairness via Optimal Transport Projections. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9649-9659. Available from https://proceedings.mlr.press/v139/si21a.html.
