Learning Fair Representations

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, Cynthia Dwork
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):325-333, 2013.

Abstract

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.
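The objective described above can be illustrated with a minimal numpy sketch: a prototype-based representation is scored by three competing terms, one for group fairness (prototype usage should look the same inside and outside the protected group), one for encoding the data well (reconstruction error), and one for keeping the representation predictive of the label. The function and variable names below are illustrative, not the paper's exact notation or implementation.

```python
import numpy as np

def lfr_style_objective(X, y, protected, prototypes, w,
                        A_z=1.0, A_x=1.0, A_y=1.0):
    """Sketch of a fair-representation loss with three competing terms.

    X: (n, d) features; y: (n,) binary labels in {0, 1};
    protected: (n,) boolean mask for the protected group;
    prototypes: (k, d) prototype locations; w: (k,) prototype
    prediction weights in (0, 1). All names are hypothetical.
    """
    # Soft assignment of each example to prototypes (softmax over
    # negative squared distances) -- the learned representation.
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (n, k)
    M = np.exp(-d2)
    M /= M.sum(axis=1, keepdims=True)

    # L_z: group fairness -- mean prototype usage should match
    # between the protected group and everyone else.
    L_z = np.abs(M[protected].mean(axis=0) - M[~protected].mean(axis=0)).sum()

    # L_x: encode the data well -- reconstruct X from the prototypes.
    X_hat = M @ prototypes
    L_x = ((X - X_hat) ** 2).sum(axis=1).mean()

    # L_y: keep the representation useful for classification
    # (binary cross-entropy of prototype-weighted predictions).
    p = np.clip(M @ w, 1e-7, 1 - 1e-7)
    L_y = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

    # Weighted sum; the A_* hyperparameters trade the goals off.
    return A_z * L_z + A_x * L_x + A_y * L_y
```

In the paper this kind of loss is minimized jointly over the prototype locations and prediction weights; the sketch only evaluates it for fixed parameters.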

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-zemel13,
  title     = {Learning Fair Representations},
  author    = {Rich Zemel and Yu Wu and Kevin Swersky and Toni Pitassi and Cynthia Dwork},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {325--333},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/zemel13.pdf},
  url       = {http://proceedings.mlr.press/v28/zemel13.html},
  abstract  = {We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.}
}
Endnote
%0 Conference Paper
%T Learning Fair Representations
%A Rich Zemel
%A Yu Wu
%A Kevin Swersky
%A Toni Pitassi
%A Cynthia Dwork
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-zemel13
%I PMLR
%J Proceedings of Machine Learning Research
%P 325--333
%U http://proceedings.mlr.press
%V 28
%N 3
%W PMLR
%X We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.
RIS
TY  - CPAPER
TI  - Learning Fair Representations
AU  - Rich Zemel
AU  - Yu Wu
AU  - Kevin Swersky
AU  - Toni Pitassi
AU  - Cynthia Dwork
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-zemel13
PB  - PMLR
SP  - 325
EP  - 333
DP  - PMLR
L1  - http://proceedings.mlr.press/v28/zemel13.pdf
UR  - http://proceedings.mlr.press/v28/zemel13.html
AB  - We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.
ER  -
APA
Zemel, R., Wu, Y., Swersky, K., Pitassi, T. & Dwork, C. (2013). Learning Fair Representations. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):325-333.
