Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees

L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1349-1361, 2021.

Abstract

We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a classifier that comes with provable guarantees on both accuracy and fairness. Empirically, we show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large, in two real-world datasets.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-celis21a,
  title     = {Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees},
  author    = {Celis, L. Elisa and Huang, Lingxiao and Keswani, Vijay and Vishnoi, Nisheeth K.},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1349--1361},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/celis21a/celis21a.pdf},
  url       = {https://proceedings.mlr.press/v139/celis21a.html},
  abstract  = {We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a classifier that comes with provable guarantees on both accuracy and fairness. Empirically, we show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large, in two real-world datasets.}
}
Endnote
%0 Conference Paper
%T Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees
%A L. Elisa Celis
%A Lingxiao Huang
%A Vijay Keswani
%A Nisheeth K. Vishnoi
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-celis21a
%I PMLR
%P 1349--1361
%U https://proceedings.mlr.press/v139/celis21a.html
%V 139
%X We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. Compared to prior work, our framework can be employed with a very general class of linear and linear-fractional fairness constraints, can handle multiple, non-binary protected attributes, and outputs a classifier that comes with provable guarantees on both accuracy and fairness. Empirically, we show that our framework can be used to attain either statistical rate or false positive rate fairness guarantees with a minimal loss in accuracy, even when the noise is large, in two real-world datasets.
APA
Celis, L.E., Huang, L., Keswani, V. & Vishnoi, N.K. (2021). Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1349-1361. Available from https://proceedings.mlr.press/v139/celis21a.html.