The cost of fairness in binary classification

Aditya Krishna Menon, Robert C Williamson
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:107-118, 2018.

Abstract

Binary classifiers are often required to possess fairness in the sense of not overly discriminating with respect to a feature deemed sensitive, e.g., race. We study the inherent tradeoffs in learning classifiers with a fairness constraint in the form of two questions: what is the best accuracy we can expect for a given level of fairness, and what is the nature of these optimal fairness-aware classifiers? To answer these questions, we provide three main contributions. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we relate the tradeoff between accuracy and fairness to the alignment between the target and sensitive features’ class-probabilities. A practical implication of our analysis is a simple approach to the fairness-aware problem that involves suitably thresholding class-probability estimates.
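The thresholding idea can be illustrated with a minimal sketch. Everything below is our own illustrative construction, not the paper's exact method: we draw synthetic class-probability estimates for two sensitive groups, and pick group-dependent thresholds that approximately equalize positive prediction rates (a demographic-parity-style criterion).

```python
import random

random.seed(0)

# Synthetic class-probability estimates for two sensitive groups.
# Group B's scores are skewed higher, so a single shared threshold
# produces unequal positive rates across the groups.
group_a = [random.betavariate(2, 2) for _ in range(1000)]
group_b = [random.betavariate(4, 2) for _ in range(1000)]

def positive_rate(scores, threshold):
    """Fraction of instances classified positive at this threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def threshold_for_rate(scores, target_rate):
    """Pick the observed score whose positive rate is closest to target_rate."""
    candidates = sorted(set(scores))
    return min(candidates,
               key=lambda t: abs(positive_rate(scores, t) - target_rate))

# The accuracy-optimal classifier for symmetric costs thresholds both
# groups at 0.5, which here yields very different positive rates.
rate_a = positive_rate(group_a, 0.5)
rate_b = positive_rate(group_b, 0.5)
gap_before = abs(rate_a - rate_b)

# Group-dependent thresholds chosen to hit a common target rate
# restore (approximate) parity of positive rates.
target = (rate_a + rate_b) / 2
t_a = threshold_for_rate(group_a, target)
t_b = threshold_for_rate(group_b, target)
gap_after = abs(positive_rate(group_a, t_a) - positive_rate(group_b, t_b))

print(f"parity gap before: {gap_before:.3f}, after: {gap_after:.3f}")
```

The paper's analysis concerns the population-optimal classifier, which thresholds the true class-probability function instance-dependently; the sketch only shows why group-dependent thresholds can trade accuracy for a smaller fairness gap on held-out scores.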

Cite this Paper


BibTeX
@InProceedings{pmlr-v81-menon18a,
  title     = {The cost of fairness in binary classification},
  author    = {Menon, Aditya Krishna and Williamson, Robert C},
  booktitle = {Proceedings of the 1st Conference on Fairness, Accountability and Transparency},
  pages     = {107--118},
  year      = {2018},
  editor    = {Friedler, Sorelle A. and Wilson, Christo},
  volume    = {81},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v81/menon18a/menon18a.pdf},
  url       = {https://proceedings.mlr.press/v81/menon18a.html},
  abstract  = {Binary classifiers are often required to possess fairness in the sense of not overly discriminating with respect to a feature deemed sensitive e.g. race. We study the inherent tradeoffs in learning classifiers with a fairness constraint in the form of two questions: what is the best accuracy we can expect for a given level of fairness?, and what is the nature of these optimal fairness-aware classifiers? To answer these questions, we provide three main contributions. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we relate the tradeoff between accuracy and fairness to the alignment between the target and sensitive features’ class-probabilities. A practical implication of our analysis is a simple approach to the fairness-aware problem which involves suitably thresholding class-probability estimates.}
}
Endnote
%0 Conference Paper
%T The cost of fairness in binary classification
%A Aditya Krishna Menon
%A Robert C Williamson
%B Proceedings of the 1st Conference on Fairness, Accountability and Transparency
%C Proceedings of Machine Learning Research
%D 2018
%E Sorelle A. Friedler
%E Christo Wilson
%F pmlr-v81-menon18a
%I PMLR
%P 107--118
%U https://proceedings.mlr.press/v81/menon18a.html
%V 81
%X Binary classifiers are often required to possess fairness in the sense of not overly discriminating with respect to a feature deemed sensitive e.g. race. We study the inherent tradeoffs in learning classifiers with a fairness constraint in the form of two questions: what is the best accuracy we can expect for a given level of fairness?, and what is the nature of these optimal fairness-aware classifiers? To answer these questions, we provide three main contributions. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we relate the tradeoff between accuracy and fairness to the alignment between the target and sensitive features’ class-probabilities. A practical implication of our analysis is a simple approach to the fairness-aware problem which involves suitably thresholding class-probability estimates.
APA
Menon, A. K., & Williamson, R. C. (2018). The cost of fairness in binary classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research, 81:107-118. Available from https://proceedings.mlr.press/v81/menon18a.html.