Wasserstein Fair Classification

Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, Silvia Chiappa
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:862-872, 2020.

Abstract

We propose an approach to fair classification that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances. The approach has desirable theoretical properties and is robust to specific choices of the threshold used to obtain class predictions from model outputs. We introduce different methods that enable hiding sensitive information at test time or have a simple and fast implementation. We show empirical performance against different fairness baselines on several benchmark fairness datasets.
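
As a rough illustration of the idea in the abstract, the sketch below penalizes the Wasserstein-1 distance between each group's empirical distribution of classifier scores and the pooled score distribution, so that minimizing the penalty pushes the outputs toward independence from the sensitive attribute. This is a minimal sketch under assumed names (demographic_parity_penalty, lambda_fair) and toy data, not the authors' implementation; it only relies on scipy.stats.wasserstein_distance applied to empirical samples.

import numpy as np
from scipy.stats import wasserstein_distance

def demographic_parity_penalty(scores, groups):
    # Sum of Wasserstein-1 distances between each group's empirical score
    # distribution and the pooled score distribution; the sum is zero exactly
    # when every group's outputs follow the same distribution.
    penalty = 0.0
    for g in np.unique(groups):
        penalty += wasserstein_distance(scores[groups == g], scores)
    return penalty

# Toy usage with synthetic scores from a hypothetical probabilistic classifier.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)                    # binary sensitive attribute
scores = np.clip(rng.normal(0.5 + 0.1 * groups, 0.15), 0.0, 1.0)

lambda_fair = 1.0      # assumed trade-off weight between accuracy and fairness
task_loss = 0.3        # placeholder for, e.g., a log loss on the task labels
total_loss = task_loss + lambda_fair * demographic_parity_penalty(scores, groups)
print(total_loss)

Because the penalty is computed on the continuous scores rather than on thresholded predictions, any threshold applied afterward inherits the same independence property, which is the robustness to threshold choice mentioned in the abstract.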

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-jiang20a,
  title     = {Wasserstein Fair Classification},
  author    = {Jiang, Ray and Pacchiano, Aldo and Stepleton, Tom and Jiang, Heinrich and Chiappa, Silvia},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {862--872},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/jiang20a/jiang20a.pdf},
  url       = {https://proceedings.mlr.press/v115/jiang20a.html},
  abstract  = {We propose an approach to fair classification that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances. The approach has desirable theoretical properties and is robust to specific choices of the threshold used to obtain class predictions from model outputs. We introduce different methods that enable hiding sensitive information at test time or have a simple and fast implementation. We show empirical performance against different fairness baselines on several benchmark fairness datasets.}
}
APA
Jiang, R., Pacchiano, A., Stepleton, T., Jiang, H. & Chiappa, S. (2020). Wasserstein Fair Classification. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:862-872. Available from https://proceedings.mlr.press/v115/jiang20a.html.
