Stable and Fair Classification

Lingxiao Huang, Nisheeth Vishnoi
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2879-2890, 2019.

Abstract

In a recent study, Friedler et al. pointed out that several fair classification algorithms are not stable with respect to variations in the training set – a crucial consideration in several applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework for fair classification algorithms that are formulated as optimization problems, obtained by introducing a stability-focused regularization term. Theoretically, we prove a stability guarantee that was lacking in prior fair classification algorithms, and we also provide an accuracy guarantee for our extended framework. The accuracy guarantee can be used to inform the selection of the regularization parameter in our framework. We assess the benefits of our approach empirically by extending several fair classification algorithms that have been shown to achieve the best balance between fairness and accuracy on the Adult dataset. Our empirical results show that our extended framework indeed improves stability at only a slight sacrifice in accuracy.
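
The framework sketched in the abstract, adding a stability-focused regularization term to a fair classification algorithm formulated as an optimization problem, can be illustrated with a short example. The following Python sketch is only illustrative and is not the paper's actual formulation: the logistic loss, the covariance-style fairness penalty, the L2 stability regularizer, and the parameter names (lam_fair, lam_stab) are all assumptions made here for concreteness.

# Illustrative sketch (not the authors' code): an optimization-based fair
# classification objective extended with a stability regularizer. All
# modeling choices below are assumptions for illustration only.
import numpy as np
from scipy.optimize import minimize

def logistic_loss(w, X, y):
    """Average logistic loss; labels y are in {-1, +1}."""
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def fairness_penalty(w, X, groups):
    """Covariance-style proxy penalty between the protected attribute and
    the decision scores (a stand-in fairness term, not the paper's)."""
    scores = X @ w
    return np.mean((groups - groups.mean()) * scores) ** 2

def stable_fair_objective(w, X, y, groups, lam_fair=1.0, lam_stab=0.1):
    """Fair-classification objective plus a stability-focused regularizer.
    lam_stab controls the stability/accuracy trade-off; the abstract notes
    that the paper's accuracy guarantee can inform this choice."""
    return (logistic_loss(w, X, y)
            + lam_fair * fairness_penalty(w, X, groups)
            + lam_stab * np.dot(w, w))  # strongly convex term for stability

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))
    groups = (X[:, 1] > 0).astype(float)  # synthetic protected attribute
    res = minimize(stable_fair_objective, np.zeros(5), args=(X, y, groups))
    print("learned weights:", np.round(res.x, 3))

The added strongly convex term is what damps the classifier's sensitivity to perturbations of the training set; larger values of lam_stab trade accuracy for stability, which is the trade-off the paper's guarantees quantify.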

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-huang19e,
  title     = {Stable and Fair Classification},
  author    = {Huang, Lingxiao and Vishnoi, Nisheeth},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2879--2890},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/huang19e/huang19e.pdf},
  url       = {https://proceedings.mlr.press/v97/huang19e.html},
  abstract  = {In a recent study, Friedler et al. pointed out that several fair classification algorithms are not stable with respect to variations in the training set – a crucial consideration in several applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework based on fair classification algorithms that are formulated as optimization problems, by introducing a stability-focused regularization term. Theoretically, we prove an additional stability guarantee, that was lacking in fair classification algorithms, and also provide an accuracy guarantee for our extended framework. Our accuracy guarantee can be used to inform the selection of the regularization parameter in our framework. We assess the benefits of our approach empirically by extending several fair classification algorithms that are shown to achieve the best balance between fairness and accuracy over the \textbf{Adult} dataset. Our empirical results show that our extended framework indeed improves the stability at only a slight sacrifice in accuracy.}
}
Endnote
%0 Conference Paper
%T Stable and Fair Classification
%A Lingxiao Huang
%A Nisheeth Vishnoi
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-huang19e
%I PMLR
%P 2879--2890
%U https://proceedings.mlr.press/v97/huang19e.html
%V 97
%X In a recent study, Friedler et al. pointed out that several fair classification algorithms are not stable with respect to variations in the training set – a crucial consideration in several applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework based on fair classification algorithms that are formulated as optimization problems, by introducing a stability-focused regularization term. Theoretically, we prove an additional stability guarantee, that was lacking in fair classification algorithms, and also provide an accuracy guarantee for our extended framework. Our accuracy guarantee can be used to inform the selection of the regularization parameter in our framework. We assess the benefits of our approach empirically by extending several fair classification algorithms that are shown to achieve the best balance between fairness and accuracy over the Adult dataset. Our empirical results show that our extended framework indeed improves the stability at only a slight sacrifice in accuracy.
APA
Huang, L. & Vishnoi, N. (2019). Stable and Fair Classification. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2879-2890. Available from https://proceedings.mlr.press/v97/huang19e.html.
