Two Simple Ways to Learn Individual Fairness Metrics from Data

Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7097-7107, 2020.

Abstract

Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. We also provide theoretical guarantees on the statistical performance of both approaches.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mukherjee20a,
  title     = {Two Simple Ways to Learn Individual Fairness Metrics from Data},
  author    = {Mukherjee, Debarghya and Yurochkin, Mikhail and Banerjee, Moulinath and Sun, Yuekai},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7097--7107},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/mukherjee20a/mukherjee20a.pdf},
  url       = {https://proceedings.mlr.press/v119/mukherjee20a.html},
  abstract  = {Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. We also provide theoretical guarantees on the statistical performance of both approaches.}
}
Endnote
%0 Conference Paper
%T Two Simple Ways to Learn Individual Fairness Metrics from Data
%A Debarghya Mukherjee
%A Mikhail Yurochkin
%A Moulinath Banerjee
%A Yuekai Sun
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mukherjee20a
%I PMLR
%P 7097--7107
%U https://proceedings.mlr.press/v119/mukherjee20a.html
%V 119
%X Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. We also provide theoretical guarantees on the statistical performance of both approaches.
APA
Mukherjee, D., Yurochkin, M., Banerjee, M. &amp; Sun, Y. (2020). Two Simple Ways to Learn Individual Fairness Metrics from Data. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7097-7107. Available from https://proceedings.mlr.press/v119/mukherjee20a.html.