Learning Controllable Fair Representations

Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, Stefano Ermon
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2164-2173, 2019.

Abstract

Learning data representations that are transferable and are fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data. We propose an information-theoretically motivated objective for learning maximally expressive representations subject to fairness constraints. We demonstrate that a range of existing approaches optimize approximations to the Lagrangian dual of our objective. In contrast to these existing approaches, our objective allows the user to control the fairness of the representations by specifying limits on unfairness. Exploiting duality, we introduce a method that optimizes the model parameters as well as the expressiveness-fairness trade-off. Empirical evidence suggests that our proposed method can balance the trade-off between multiple notions of fairness and achieves higher expressiveness at a lower computational cost.
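The control mechanism the abstract describes, maximizing expressiveness subject to a user-specified unfairness limit by optimizing the Lagrangian dual, can be illustrated with a minimal toy sketch. This is not the paper's implementation: the scalar "expressiveness" and "unfairness" functions and all names below are illustrative assumptions; the idea shown is generic dual gradient ascent, where the multiplier grows while the constraint is violated and shrinks toward zero otherwise.

```python
# Toy sketch of constrained optimization via the Lagrangian dual:
#   maximize expressiveness(z)  subject to  unfairness(z) <= epsilon.
# The 1-D problem and both stand-in functions are illustrative assumptions.

def expressiveness(z):
    # stand-in utility: concave, peaks at z = 2
    return -(z - 2.0) ** 2

def unfairness(z):
    # stand-in fairness cost: grows with |z|
    return z ** 2

def solve(epsilon, lr=0.05, dual_lr=0.05, steps=5000):
    z, lam = 0.0, 0.0  # primal variable and Lagrange multiplier
    for _ in range(steps):
        # primal ascent on L(z, lam) = expressiveness(z) - lam * (unfairness(z) - epsilon)
        grad_z = -2.0 * (z - 2.0) - lam * 2.0 * z
        z += lr * grad_z
        # dual ascent: lam increases while the unfairness limit is violated,
        # and is clipped at zero (multipliers are non-negative)
        lam = max(0.0, lam + dual_lr * (unfairness(z) - epsilon))
    return z, lam

z, lam = solve(epsilon=1.0)
# With epsilon = 1.0 the constraint is active: z settles near 1.0 (not the
# unconstrained optimum 2.0), and unfairness(z) approaches the limit 1.0.
```

The same pattern applies when `z` is a learned representation and `unfairness` is, e.g., an estimated mutual-information bound: the multiplier is trained jointly with the model, so the user specifies the limit `epsilon` rather than hand-tuning a fixed trade-off weight.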

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-song19a,
  title     = {Learning Controllable Fair Representations},
  author    = {Song, Jiaming and Kalluri, Pratyusha and Grover, Aditya and Zhao, Shengjia and Ermon, Stefano},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2164--2173},
  year      = {2019},
  editor    = {Kamalika Chaudhuri and Masashi Sugiyama},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/song19a/song19a.pdf},
  url       = {http://proceedings.mlr.press/v89/song19a.html},
  abstract  = {Learning data representations that are transferable and are fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data. We propose an information-theoretically motivated objective for learning maximally expressive representations subject to fairness constraints. We demonstrate that a range of existing approaches optimize approximations to the Lagrangian dual of our objective. In contrast to these existing approaches, our objective allows the user to control the fairness of the representations by specifying limits on unfairness. Exploiting duality, we introduce a method that optimizes the model parameters as well as the expressiveness-fairness trade-off. Empirical evidence suggests that our proposed method can balance the trade-off between multiple notions of fairness and achieves higher expressiveness at a lower computational cost.}
}
EndNote
%0 Conference Paper
%T Learning Controllable Fair Representations
%A Jiaming Song
%A Pratyusha Kalluri
%A Aditya Grover
%A Shengjia Zhao
%A Stefano Ermon
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-song19a
%I PMLR
%P 2164--2173
%U http://proceedings.mlr.press/v89/song19a.html
%V 89
%W PMLR
%X Learning data representations that are transferable and are fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data. We propose an information-theoretically motivated objective for learning maximally expressive representations subject to fairness constraints. We demonstrate that a range of existing approaches optimize approximations to the Lagrangian dual of our objective. In contrast to these existing approaches, our objective allows the user to control the fairness of the representations by specifying limits on unfairness. Exploiting duality, we introduce a method that optimizes the model parameters as well as the expressiveness-fairness trade-off. Empirical evidence suggests that our proposed method can balance the trade-off between multiple notions of fairness and achieves higher expressiveness at a lower computational cost.
APA
Song, J., Kalluri, P., Grover, A., Zhao, S. & Ermon, S. (2019). Learning Controllable Fair Representations. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2164-2173.