Characterizing Intersectional Group Fairness with Worst-Case Comparisons

Avijit Ghosh, Lea Genuit, Mary Reagan
Proceedings of 2nd Workshop on Diversity in Artificial Intelligence (AIDBEI), PMLR 142:22-34, 2021.

Abstract

Machine learning and artificial intelligence algorithms have come under considerable scrutiny in recent times owing to their propensity to imitate and amplify existing prejudices in society. This has led to a niche but growing body of work that identifies and attempts to fix these biases. A first step towards making these algorithms fairer is designing metrics that measure unfairness. Most existing work in this field deals with either a binary view of fairness (protected vs. unprotected groups) or politically defined categories (race or gender). Such categorization misses the important nuance of intersectionality: biases can often be amplified in subgroups that combine membership from different categories, especially if such a subgroup is particularly underrepresented in historical platforms of opportunity. In this paper, we discuss why fairness metrics need to be examined through the lens of intersectionality, identify existing work in intersectional fairness, suggest a simple worst-case comparison method that extends the definitions of existing group fairness metrics to incorporate intersectionality, and finally conclude with the social, legal and political framework for handling intersectional fairness in the modern context.
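
The sketch below illustrates the kind of worst-case comparison the abstract describes, assuming demographic parity (the positive prediction rate) as the base group fairness metric: compute the metric for every intersectional subgroup formed by crossing the protected attributes, then compare the worst-off and best-off subgroups. The column names, the min/max-ratio criterion, and the helper function are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (not the paper's exact formulation): extend a group fairness
# metric to intersectional subgroups via a worst-case comparison.
# Assumes demographic parity (positive prediction rate) as the base metric;
# column names and the ratio criterion are illustrative assumptions.
from itertools import product

import pandas as pd


def worst_case_parity_ratio(df: pd.DataFrame, protected_cols: list[str],
                            pred_col: str = "y_pred") -> float:
    """Return min(rate)/max(rate) of positive predictions across all
    intersectional subgroups formed by the given protected attributes.
    A value of 1.0 means perfect parity; values near 0 mean the worst-off
    subgroup is treated very differently from the best-off one."""
    rates = []
    # Enumerate every combination of protected attribute values
    # (the intersectional subgroups) and record each subgroup's rate.
    values = [df[c].unique() for c in protected_cols]
    for combo in product(*values):
        mask = pd.Series(True, index=df.index)
        for col, val in zip(protected_cols, combo):
            mask &= df[col] == val
        subgroup = df[mask]
        if len(subgroup) == 0:
            continue  # skip empty intersections
        rates.append(subgroup[pred_col].mean())
    return min(rates) / max(rates)


# Toy usage: gender x race subgroups of binary predictions.
data = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "race":   ["A", "B", "A", "B", "A", "A", "B", "B"],
    "y_pred": [1, 0, 1, 1, 1, 1, 1, 1],
})
print(worst_case_parity_ratio(data, ["gender", "race"]))  # 0.5: F/B subgroup lags

The same wrapper can be applied to other base metrics (e.g. true positive rates for equal opportunity) by swapping out the per-subgroup quantity being compared.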

Cite this Paper


BibTeX
@InProceedings{pmlr-v142-ghosh21a,
  title     = {Characterizing Intersectional Group Fairness with Worst-Case Comparisons},
  author    = {Ghosh, Avijit and Genuit, Lea and Reagan, Mary},
  booktitle = {Proceedings of 2nd Workshop on Diversity in Artificial Intelligence (AIDBEI)},
  pages     = {22--34},
  year      = {2021},
  editor    = {Lamba, Deepti and Hsu, William H.},
  volume    = {142},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v142/ghosh21a/ghosh21a.pdf},
  url       = {https://proceedings.mlr.press/v142/ghosh21a.html},
  abstract  = {Machine Learning or Artificial Intelligence algorithms have gained considerable scrutiny in recent times owing to their propensity towards imitating and amplifying existing prejudices in society. This has led to a niche but growing body of work that identifies and attempts to fix these biases. A first step towards making these algorithms more fair is designing metrics that measure unfairness. Most existing work in this field deals with either a binary view of fairness (protected vs. unprotected groups) or politically defined categories (race or gender). Such categorization misses the important nuance of intersectionality - biases can often be amplified in subgroups that combine membership from different categories, especially if such a subgroup is particularly underrepresented in historical platforms of opportunity. In this paper, we discuss why fairness metrics need to be looked at under the lens of intersectionality, identify existing work in intersectional fairness, suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics to incorporate intersectionality, and finally conclude with the social, legal and political framework to handle intersectional fairness in the modern context.}
}
Endnote
%0 Conference Paper
%T Characterizing Intersectional Group Fairness with Worst-Case Comparisons
%A Avijit Ghosh
%A Lea Genuit
%A Mary Reagan
%B Proceedings of 2nd Workshop on Diversity in Artificial Intelligence (AIDBEI)
%C Proceedings of Machine Learning Research
%D 2021
%E Deepti Lamba
%E William H. Hsu
%F pmlr-v142-ghosh21a
%I PMLR
%P 22--34
%U https://proceedings.mlr.press/v142/ghosh21a.html
%V 142
%X Machine Learning or Artificial Intelligence algorithms have gained considerable scrutiny in recent times owing to their propensity towards imitating and amplifying existing prejudices in society. This has led to a niche but growing body of work that identifies and attempts to fix these biases. A first step towards making these algorithms more fair is designing metrics that measure unfairness. Most existing work in this field deals with either a binary view of fairness (protected vs. unprotected groups) or politically defined categories (race or gender). Such categorization misses the important nuance of intersectionality - biases can often be amplified in subgroups that combine membership from different categories, especially if such a subgroup is particularly underrepresented in historical platforms of opportunity. In this paper, we discuss why fairness metrics need to be looked at under the lens of intersectionality, identify existing work in intersectional fairness, suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics to incorporate intersectionality, and finally conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
APA
Ghosh, A., Genuit, L. & Reagan, M. (2021). Characterizing Intersectional Group Fairness with Worst-Case Comparisons. Proceedings of 2nd Workshop on Diversity in Artificial Intelligence (AIDBEI), in Proceedings of Machine Learning Research 142:22-34. Available from https://proceedings.mlr.press/v142/ghosh21a.html.
