The cost of fairness in binary classification
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:107-118, 2018.
Abstract
Binary classifiers are often required to possess fairness in the sense of not overly discriminating with respect to a feature deemed sensitive, e.g. race. We study the inherent tradeoffs in learning classifiers with a fairness constraint in the form of two questions: what is the best accuracy we can expect for a given level of fairness, and what is the nature of these optimal fairness-aware classifiers? To answer these questions, we provide three main contributions. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we relate the tradeoff between accuracy and fairness to the alignment between the target and sensitive features' class-probabilities. A practical implication of our analysis is a simple approach to the fairness-aware problem which involves suitably thresholding class-probability estimates.
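The thresholding approach mentioned above can be illustrated with a minimal sketch: given class-probability estimates and a binary sensitive feature, each group's predictions are obtained by thresholding the probabilities at a group-specific level. The synthetic data and the threshold values here are purely illustrative assumptions; the paper derives the appropriate thresholds from the cost parameters of the fairness measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: a binary sensitive feature s and
# class-probability estimates eta(x) ~ P(Y = 1 | x).
n = 1000
s = rng.integers(0, 2, size=n)      # sensitive attribute (e.g. group membership)
eta = rng.uniform(0.0, 1.0, size=n) # estimated class-probabilities

# Instance-dependent thresholding: the threshold applied to eta(x)
# depends on the instance's sensitive feature. These values are
# placeholders, not the paper's derived optimal thresholds.
thresholds = {0: 0.5, 1: 0.4}
t = np.where(s == 1, thresholds[1], thresholds[0])
y_pred = (eta > t).astype(int)

# Per-group acceptance rates; tuning the thresholds trades off
# accuracy against the disparity between these rates.
rate_0 = y_pred[s == 0].mean()
rate_1 = y_pred[s == 1].mean()
```

In practice `eta` would come from any probabilistic classifier (e.g. logistic regression), and only the final thresholding step needs to be fairness-aware.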