Fair for All: Best-effort Fairness Guarantees for Classification

Anilesh Krishnaswamy, Zhihao Jiang, Kangning Wang, Yu Cheng, Kamesh Munagala
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3259-3267, 2021.

Abstract

Standard approaches to group-based notions of fairness, such as parity and equalized odds, try to equalize absolute measures of performance across known groups (based on race, gender, etc.). Consequently, a group that is inherently harder to classify may hold back the performance on other groups; and no guarantees can be provided for unforeseen groups. Instead, we propose a fairness notion whose guarantee, on each group g in a class G, is relative to the performance of the best classifier on g. We apply this notion to broad classes of groups, in particular, where (a) G consists of all possible groups (subsets) in the data, and (b) G is more streamlined. For the first setting, which is akin to groups being completely unknown, we devise the PF (Proportional Fairness) classifier, which guarantees, on any possible group g, an accuracy that is proportional to that of the optimal classifier for g, scaled by the relative size of g in the data set. Due to including all possible groups, some of which could be too complex to be relevant, the worst-case theoretical guarantees here have to be proportionally weaker for smaller subsets. For the second setting, we devise the BeFair (Best-effort Fair) framework which seeks an accuracy, on every g in G, which approximates that of the optimal classifier on g, independent of the size of g. Aiming for such a guarantee results in a non-convex problem, and we design novel techniques to get around this difficulty when G is the set of linear hypotheses. We test our algorithms on real-world data sets, and present interesting comparative insights on their performance.
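In symbols, the two guarantees can be read as follows. This is an illustrative sketch based only on the abstract: the constant c, the tolerance ε, and the names h_PF and h_BF are placeholders, and the precise statements and constants appear in the paper. Writing acc(h, g) for the accuracy of a classifier h on a group g, h*_g for the best classifier for g, and n for the size of the data set:

% PF: on any subset g of the data, accuracy proportional to the optimum
% for g, scaled by g's relative size |g|/n (c is an unspecified constant;
% its exact value is given in the paper, not here).
\[
  \mathrm{acc}(h_{\mathrm{PF}}, g) \;\ge\; c \cdot \frac{|g|}{n} \cdot \mathrm{acc}(h^{*}_{g}, g)
  \qquad \text{for every group } g \text{ in the data,}
\]
% BeFair: on every g in the class G, accuracy within an additive
% tolerance of the optimum for g, independent of the size of g.
\[
  \mathrm{acc}(h_{\mathrm{BF}}, g) \;\ge\; \mathrm{acc}(h^{*}_{g}, g) - \varepsilon
  \qquad \text{for every } g \in \mathcal{G}.
\]

The first bound degrades gracefully as g shrinks, which is the price of covering all possible subsets; the second is size-independent but holds only for the more streamlined class G.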

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-krishnaswamy21a,
  title     = {Fair for All: Best-effort Fairness Guarantees for Classification},
  author    = {Krishnaswamy, Anilesh and Jiang, Zhihao and Wang, Kangning and Cheng, Yu and Munagala, Kamesh},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3259--3267},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/krishnaswamy21a/krishnaswamy21a.pdf},
  url       = {https://proceedings.mlr.press/v130/krishnaswamy21a.html}
}
Endnote
%0 Conference Paper
%T Fair for All: Best-effort Fairness Guarantees for Classification
%A Anilesh Krishnaswamy
%A Zhihao Jiang
%A Kangning Wang
%A Yu Cheng
%A Kamesh Munagala
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-krishnaswamy21a
%I PMLR
%P 3259--3267
%U https://proceedings.mlr.press/v130/krishnaswamy21a.html
%V 130
APA
Krishnaswamy, A., Jiang, Z., Wang, K., Cheng, Y. & Munagala, K. (2021). Fair for All: Best-effort Fairness Guarantees for Classification. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3259-3267. Available from https://proceedings.mlr.press/v130/krishnaswamy21a.html.