Learning Smooth and Fair Representations

Xavier Gitiaux, Huzefa Rangwala
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:253-261, 2021.

Abstract

This paper explores the statistical properties of fair representation learning, a pre-processing method that preemptively removes the correlations between features and sensitive attributes by mapping features to a fair representation space. We show that the demographic parity of a representation can be certified from a finite sample if and only if the mapping guarantees that the chi-squared mutual information between features and representations is finite for distributions of the features. Empirically, we find that smoothing representations with additive Gaussian white noise provides generalization guarantees for fairness certificates, which improves upon existing fair representation learning approaches.
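The smoothing idea can be illustrated with a minimal sketch: add Gaussian white noise to an encoder's output before certifying fairness on it. This is not the authors' implementation; the function name, the noise scale `sigma`, and the toy encoder are illustrative assumptions.

```python
import numpy as np

def smooth_representation(z, sigma=0.1, rng=None):
    """Smooth a representation by adding Gaussian white noise.

    z:     array of shape (n_samples, d), the encoder outputs.
    sigma: standard deviation of the additive noise (illustrative default).

    Adding noise makes the feature-to-representation mapping stochastic,
    which is the kind of smoothing the paper links to finite chi-squared
    mutual information and hence to fairness certificates that generalize
    from a finite sample.
    """
    rng = np.random.default_rng(rng)
    return z + rng.normal(scale=sigma, size=z.shape)

# Toy usage: a deterministic "encoder" output for 5 samples in 3 dimensions.
z = np.ones((5, 3))
z_noisy = smooth_representation(z, sigma=0.1, rng=0)
print(z_noisy.shape)  # (5, 3)
```

Any demographic-parity certificate would then be computed on `z_noisy` rather than `z`; the noise scale trades off how much information the representation retains against how tightly the certificate generalizes.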

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-gitiaux21a,
  title = {Learning Smooth and Fair Representations},
  author = {Gitiaux, Xavier and Rangwala, Huzefa},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages = {253--261},
  year = {2021},
  editor = {Banerjee, Arindam and Fukumizu, Kenji},
  volume = {130},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v130/gitiaux21a/gitiaux21a.pdf},
  url = {https://proceedings.mlr.press/v130/gitiaux21a.html},
  abstract = {This paper explores the statistical properties of fair representation learning, a pre-processing method that preemptively removes the correlations between features and sensitive attributes by mapping features to a fair representation space. We show that the demographic parity of a representation can be certified from a finite sample if and only if the mapping guarantees that the chi-squared mutual information between features and representations is finite for distributions of the features. Empirically, we find that smoothing representations with an additive Gaussian white noise provides generalization guarantees of fairness certificates, which improves upon existing fair representation learning approaches.}
}
Endnote
%0 Conference Paper
%T Learning Smooth and Fair Representations
%A Xavier Gitiaux
%A Huzefa Rangwala
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-gitiaux21a
%I PMLR
%P 253--261
%U https://proceedings.mlr.press/v130/gitiaux21a.html
%V 130
%X This paper explores the statistical properties of fair representation learning, a pre-processing method that preemptively removes the correlations between features and sensitive attributes by mapping features to a fair representation space. We show that the demographic parity of a representation can be certified from a finite sample if and only if the mapping guarantees that the chi-squared mutual information between features and representations is finite for distributions of the features. Empirically, we find that smoothing representations with an additive Gaussian white noise provides generalization guarantees of fairness certificates, which improves upon existing fair representation learning approaches.
APA
Gitiaux, X. & Rangwala, H. (2021). Learning Smooth and Fair Representations. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:253-261. Available from https://proceedings.mlr.press/v130/gitiaux21a.html.