Linear Adversarial Concept Erasure

Shauli Ravfogel, Michael Twiton, Yoav Goldberg, Ryan D Cotterell
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18400-18421, 2022.

Abstract

Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision. As these representations are increasingly being used in real-world applications, the inability to control their content becomes an increasingly important problem. In this work, we formulate the problem of identifying a linear subspace that corresponds to a given concept, and removing it from the representation. We formulate this problem as a constrained, linear minimax game, and show that existing solutions are generally not optimal for this task. We derive a closed-form solution for certain objectives, and propose a convex relaxation that works well for others. When evaluated in the context of binary gender removal, the method recovers a low-dimensional subspace whose removal mitigates bias by intrinsic and extrinsic evaluation. Surprisingly, we show that the method—despite being linear—is highly expressive, effectively mitigating bias in the output layers of deep, nonlinear classifiers while maintaining tractability and interpretability.
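The abstract compresses the method into a few sentences; the sketch below is a toy illustration of the underlying minimax structure only. It is not the authors' released implementation: the dimensions, synthetic data, learning rates, and the plain alternating-gradient scheme are all assumptions for illustration (the paper instead derives a closed-form solution for certain objectives and a convex relaxation for others). The idea shown is that a rank-k orthogonal projection is chosen so that a linear concept classifier trained on the projected representations fails.

# Toy sketch, NOT the authors' R-LACE code; all shapes, data, and hyperparameters
# below are illustrative assumptions.
import torch

d, k, n = 50, 1, 2000                       # representation dim, erased rank, sample count

# Synthetic data: a binary concept linearly encoded in the first coordinate.
torch.manual_seed(0)
z = torch.randint(0, 2, (n,)).float()       # concept labels
x = torch.randn(n, d)
x[:, 0] += 4.0 * z

W = torch.randn(d, k, requires_grad=True)   # spans the subspace to be erased
theta = torch.zeros(d, requires_grad=True)  # adversarial linear concept classifier
opt_w = torch.optim.SGD([W], lr=0.1)
opt_t = torch.optim.SGD([theta], lr=0.1)
bce = torch.nn.BCEWithLogitsLoss()

def remove(subspace):
    """Orthogonal projection I - B B^T that nulls out span(subspace)."""
    B, _ = torch.linalg.qr(subspace)
    return torch.eye(d) - B @ B.T

for step in range(2000):
    # Adversary (maximizer): recover the concept from the projected representations.
    loss_t = bce((x @ remove(W.detach())) @ theta, z)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # Projection (minimizer): pick the subspace so the fixed adversary fails.
    loss_w = -bce((x @ remove(W)) @ theta.detach(), z)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

with torch.no_grad():
    acc = (((x @ remove(W)) @ theta > 0).float() == z).float().mean()
    print(f"adversary accuracy after erasure: {acc:.2f}")  # near chance if the game converged

In this toy setup the learned subspace should align with the coordinate that encodes the concept, so projecting it out leaves the adversary at roughly chance accuracy; the paper evaluates the same principle on real pre-trained representations and binary gender as the concept.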

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-ravfogel22a,
  title     = {Linear Adversarial Concept Erasure},
  author    = {Ravfogel, Shauli and Twiton, Michael and Goldberg, Yoav and Cotterell, Ryan D},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18400--18421},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ravfogel22a/ravfogel22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ravfogel22a.html},
  abstract  = {Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision. As these representations are increasingly being used in real-world applications, the inability to control their content becomes an increasingly important problem. In this work, we formulate the problem of identifying a linear subspace that corresponds to a given concept, and removing it from the representation. We formulate this problem as a constrained, linear minimax game, and show that existing solutions are generally not optimal for this task. We derive a closed-form solution for certain objectives, and propose a convex relaxation that works well for others. When evaluated in the context of binary gender removal, the method recovers a low-dimensional subspace whose removal mitigates bias by intrinsic and extrinsic evaluation. Surprisingly, we show that the method—despite being linear—is highly expressive, effectively mitigating bias in the output layers of deep, nonlinear classifiers while maintaining tractability and interpretability.}
}
Endnote
%0 Conference Paper
%T Linear Adversarial Concept Erasure
%A Shauli Ravfogel
%A Michael Twiton
%A Yoav Goldberg
%A Ryan D Cotterell
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ravfogel22a
%I PMLR
%P 18400--18421
%U https://proceedings.mlr.press/v162/ravfogel22a.html
%V 162
%X Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision. As these representations are increasingly being used in real-world applications, the inability to control their content becomes an increasingly important problem. In this work, we formulate the problem of identifying a linear subspace that corresponds to a given concept, and removing it from the representation. We formulate this problem as a constrained, linear minimax game, and show that existing solutions are generally not optimal for this task. We derive a closed-form solution for certain objectives, and propose a convex relaxation that works well for others. When evaluated in the context of binary gender removal, the method recovers a low-dimensional subspace whose removal mitigates bias by intrinsic and extrinsic evaluation. Surprisingly, we show that the method—despite being linear—is highly expressive, effectively mitigating bias in the output layers of deep, nonlinear classifiers while maintaining tractability and interpretability.
APA
Ravfogel, S., Twiton, M., Goldberg, Y. & Cotterell, R.D. (2022). Linear Adversarial Concept Erasure. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18400-18421. Available from https://proceedings.mlr.press/v162/ravfogel22a.html.