Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation

Yeonsung Jung, Hajin Shim, June Yong Yang, Eunho Yang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15435-15450, 2023.

Abstract

Deep neural networks (DNNs), despite their ability to generalize with over-capacity networks, often rely heavily on malignant bias as a shortcut instead of task-related information for discriminative tasks. This can lead to poor performance on real-world inputs, particularly when the majority of training samples are biased. To address this severe bias problem, recent studies either exploit auxiliary information that is rarely obtainable in practice, or sift out a handful of bias-free samples and emphasize them for debiasing. However, these methods are not always guaranteed to work because their presumptions may not hold. In this paper, we propose Contrastive Debiasing via Generative Bias-transformation (CDvG), which operates without explicitly exploiting bias labels or bias-free samples. Motivated by our observation that not only discriminative models but also image translation models tend to focus on malignant bias, CDvG employs an image translation model to transform one mode of bias into another while preserving task-relevant information. Through contrastive learning, the bias-transformed views are set against each other to learn bias-invariant representations. Compared to previous methods, our method achieves a stronger debiasing effect when the bias is more malignant, and it can also be integrated with methods that focus on bias-free samples in a plug-and-play manner for further improvement. Experimental results on diverse datasets demonstrate that the proposed method outperforms the state of the art, especially when bias-free samples are extremely scarce or absent.
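The core step the abstract describes — contrasting each image against its bias-transformed view to learn bias-invariant representations — corresponds to a standard NT-Xent (InfoNCE) contrastive objective. The sketch below is not the authors' implementation: the function name, embedding shapes, and temperature are illustrative assumptions, and the generative bias-transformation network is abstracted away as a second set of embeddings.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss between two batches of embeddings.

    z1: embeddings of the original images, shape (N, D)
    z2: embeddings of their bias-transformed views, shape (N, D)
    Row i of z1 and row i of z2 form a positive pair; all other
    rows in the combined batch serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = (z @ z.T) / temperature                       # cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # drop self-similarity
    # The positive partner of index i is i+n (and of i+n is i).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()

rng = np.random.default_rng(0)
z_orig = rng.standard_normal((8, 16))        # stand-in encoder outputs
z_bias_transformed = rng.standard_normal((8, 16))
loss = nt_xent_loss(z_orig, z_bias_transformed)
```

Minimizing this loss pulls each image toward its bias-transformed counterpart, so features that change under the bias transformation (i.e., the bias itself) are suppressed while task-relevant content is retained.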

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-jung23b,
  title     = {Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation},
  author    = {Jung, Yeonsung and Shim, Hajin and Yang, June Yong and Yang, Eunho},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {15435--15450},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/jung23b/jung23b.pdf},
  url       = {https://proceedings.mlr.press/v202/jung23b.html},
  abstract  = {Deep neural networks (DNNs), despite their ability to generalize with over-capacity networks, often rely heavily on malignant bias as a shortcut instead of task-related information for discriminative tasks. This can lead to poor performance on real-world inputs, particularly when the majority of training samples are biased. To address this severe bias problem, recent studies either exploit auxiliary information that is rarely obtainable in practice, or sift out a handful of bias-free samples and emphasize them for debiasing. However, these methods are not always guaranteed to work because their presumptions may not hold. In this paper, we propose Contrastive Debiasing via Generative Bias-transformation (CDvG), which operates without explicitly exploiting bias labels or bias-free samples. Motivated by our observation that not only discriminative models but also image translation models tend to focus on malignant bias, CDvG employs an image translation model to transform one mode of bias into another while preserving task-relevant information. Through contrastive learning, the bias-transformed views are set against each other to learn bias-invariant representations. Compared to previous methods, our method achieves a stronger debiasing effect when the bias is more malignant, and it can also be integrated with methods that focus on bias-free samples in a plug-and-play manner for further improvement. Experimental results on diverse datasets demonstrate that the proposed method outperforms the state of the art, especially when bias-free samples are extremely scarce or absent.}
}
Endnote
%0 Conference Paper
%T Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation
%A Yeonsung Jung
%A Hajin Shim
%A June Yong Yang
%A Eunho Yang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-jung23b
%I PMLR
%P 15435--15450
%U https://proceedings.mlr.press/v202/jung23b.html
%V 202
%X Deep neural networks (DNNs), despite their ability to generalize with over-capacity networks, often rely heavily on malignant bias as a shortcut instead of task-related information for discriminative tasks. This can lead to poor performance on real-world inputs, particularly when the majority of training samples are biased. To address this severe bias problem, recent studies either exploit auxiliary information that is rarely obtainable in practice, or sift out a handful of bias-free samples and emphasize them for debiasing. However, these methods are not always guaranteed to work because their presumptions may not hold. In this paper, we propose Contrastive Debiasing via Generative Bias-transformation (CDvG), which operates without explicitly exploiting bias labels or bias-free samples. Motivated by our observation that not only discriminative models but also image translation models tend to focus on malignant bias, CDvG employs an image translation model to transform one mode of bias into another while preserving task-relevant information. Through contrastive learning, the bias-transformed views are set against each other to learn bias-invariant representations. Compared to previous methods, our method achieves a stronger debiasing effect when the bias is more malignant, and it can also be integrated with methods that focus on bias-free samples in a plug-and-play manner for further improvement. Experimental results on diverse datasets demonstrate that the proposed method outperforms the state of the art, especially when bias-free samples are extremely scarce or absent.
APA
Jung, Y., Shim, H., Yang, J. Y., & Yang, E. (2023). Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:15435-15450. Available from https://proceedings.mlr.press/v202/jung23b.html.
