Superhuman Fairness

Omid Memarrast, Linh Vu, Brian D Ziebart
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:24420-24435, 2023.

Abstract

The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This begs the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach given suboptimal decisions.
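
The abstract's notion of simultaneously outperforming a human reference on several measures can be illustrated with a small sketch. The snippet below is not from the paper; all names, data, and noise rates are hypothetical. It computes accuracy, a demographic-parity gap, and an equalized-odds gap for a model's decisions and for reference (human) decisions, then checks whether the model matches or beats the reference on every measure, which is the informal sense of "superhuman" used in the abstract.

import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of correct decisions.
    return np.mean(y_true == y_pred)

def demographic_parity_diff(y_pred, group):
    # Gap in positive-decision rates between the two groups (group is 0/1).
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    # Largest gap across groups in the false-positive (y=0) or true-positive (y=1) rate.
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

def dominates(y_true, group, y_model, y_human):
    # True if the model is at least as good as the human reference on every measure:
    # higher (or equal) accuracy, lower (or equal) unfairness gaps.
    return (accuracy(y_true, y_model) >= accuracy(y_true, y_human)
            and demographic_parity_diff(y_model, group) <= demographic_parity_diff(y_human, group)
            and equalized_odds_diff(y_true, y_model, group) <= equalized_odds_diff(y_true, y_human, group))

# Toy example: random labels, a binary protected attribute, noisy "human" decisions,
# and a slightly less noisy model.
rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)
y_human = np.where(rng.random(n) < 0.80, y_true, 1 - y_true)
y_model = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)
print(dominates(y_true, group, y_model, y_human))

The paper's actual method learns such a dominating policy via imitation learning from suboptimal demonstrations; this sketch only shows how the multiple-measure comparison itself can be evaluated.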

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-memarrast23a,
  title     = {Superhuman Fairness},
  author    = {Memarrast, Omid and Vu, Linh and Ziebart, Brian D},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {24420--24435},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/memarrast23a/memarrast23a.pdf},
  url       = {https://proceedings.mlr.press/v202/memarrast23a.html},
  abstract  = {The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This begs the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach given suboptimal decisions.}
}
Endnote
%0 Conference Paper
%T Superhuman Fairness
%A Omid Memarrast
%A Linh Vu
%A Brian D Ziebart
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-memarrast23a
%I PMLR
%P 24420--24435
%U https://proceedings.mlr.press/v202/memarrast23a.html
%V 202
%X The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This begs the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach given suboptimal decisions.
APA
Memarrast, O., Vu, L. & Ziebart, B.D. (2023). Superhuman Fairness. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:24420-24435. Available from https://proceedings.mlr.press/v202/memarrast23a.html.
