Disentanglement and Generalization Under Correlation Shifts

Christina M. Funke, Paul Vicol, Kuan-chieh Wang, Matthias Kuemmerer, Richard Zemel, Matthias Bethge
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:116-141, 2022.

Abstract

Correlations between factors of variation are prevalent in real-world data. Exploiting such correlations may increase predictive performance on noisy data; however, often correlations are not robust (e.g., they may change between domains, datasets, or applications) and models that exploit them do not generalize when correlations shift. Disentanglement methods aim to learn representations which capture different factors of variation in latent subspaces. A common approach involves minimizing the mutual information between latent subspaces, such that each encodes a single underlying attribute. However, this fails when attributes are correlated. We solve this problem by enforcing independence between subspaces conditioned on the available attributes, which allows us to remove only dependencies that are not due to the correlation structure present in the training data. We achieve this via an adversarial approach to minimize the conditional mutual information (CMI) between subspaces with respect to categorical variables. We first show theoretically that CMI minimization is a good objective for robust disentanglement on linear problems. We then apply our method on real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings.
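
For intuition, the objective described in the abstract targets the conditional mutual information I(Z1; Z2 | Y) between two latent subspaces Z1 and Z2 given the categorical attributes Y: this quantity is zero exactly when the subspaces are independent conditioned on the attributes, while marginal dependence that merely reflects the attribute correlations in the training data is left untouched. The sketch below is a minimal illustration of one generic way such a quantity can be estimated and minimized adversarially, using a Donsker-Varadhan-style lower bound with within-class shuffling; the class names, architecture, and estimator choice are assumptions made for illustration here and are not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only (not the paper's code): estimate a lower bound on
# I(z1; z2 | y) with a critic T(z1, z2, y), using the Donsker-Varadhan bound
#   I(Z1; Z2 | Y) >= E_p[T] - log E_q[exp(T)],
# where p is the joint over (z1, z2, y) and q replaces z2 by a copy shuffled
# *within* each class y (approximating p(y) p(z1|y) p(z2|y)).

class Critic(nn.Module):
    def __init__(self, d1, d2, n_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d1 + d2 + n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z1, z2, y_onehot):
        return self.net(torch.cat([z1, z2, y_onehot], dim=-1)).squeeze(-1)


def shuffle_within_class(z2, y):
    """Permute rows of z2 only among examples sharing the same label y."""
    idx = torch.arange(len(y))
    for c in y.unique():
        members = (y == c).nonzero(as_tuple=True)[0]
        idx[members] = members[torch.randperm(len(members))]
    return z2[idx]


def cmi_lower_bound(critic, z1, z2, y, n_classes):
    y_onehot = F.one_hot(y, n_classes).float()
    t_joint = critic(z1, z2, y_onehot)                          # (z1, z2) sampled jointly
    t_prod = critic(z1, shuffle_within_class(z2, y), y_onehot)  # z2 decoupled from z1 given y
    return t_joint.mean() - torch.log(torch.exp(t_prod).mean() + 1e-8)

# Adversarial use: the critic is trained to maximize this bound, while the
# encoder producing z1 and z2 is trained to minimize it, pushing the estimated
# I(z1; z2 | y) toward zero.
```

Because the second term shuffles z2 only within each attribute class, the critic can only detect dependence beyond what the attribute correlations already explain, which mirrors the paper's stated goal of removing only those dependencies that are not due to the correlation structure present in the training data.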

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-funke22a,
  title     = {Disentanglement and Generalization Under Correlation Shifts},
  author    = {Funke, Christina M. and Vicol, Paul and Wang, Kuan-chieh and Kuemmerer, Matthias and Zemel, Richard and Bethge, Matthias},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {116--141},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/funke22a/funke22a.pdf},
  url       = {https://proceedings.mlr.press/v199/funke22a.html},
  abstract  = {Correlations between factors of variation are prevalent in real-world data. Exploiting such correlations may increase predictive performance on noisy data; however, often correlations are not robust (e.g., they may change between domains, datasets, or applications) and models that exploit them do not generalize when correlations shift. Disentanglement methods aim to learn representations which capture different factors of variation in latent subspaces. A common approach involves minimizing the mutual information between latent subspaces, such that each encodes a single underlying attribute. However, this fails when attributes are correlated. We solve this problem by enforcing independence between subspaces conditioned on the available attributes, which allows us to remove only dependencies that are not due to the correlation structure present in the training data. We achieve this via an adversarial approach to minimize the conditional mutual information (CMI) between subspaces with respect to categorical variables. We first show theoretically that CMI minimization is a good objective for robust disentanglement on linear problems. We then apply our method on real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings.}
}
Endnote
%0 Conference Paper
%T Disentanglement and Generalization Under Correlation Shifts
%A Christina M. Funke
%A Paul Vicol
%A Kuan-chieh Wang
%A Matthias Kuemmerer
%A Richard Zemel
%A Matthias Bethge
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-funke22a
%I PMLR
%P 116--141
%U https://proceedings.mlr.press/v199/funke22a.html
%V 199
%X Correlations between factors of variation are prevalent in real-world data. Exploiting such correlations may increase predictive performance on noisy data; however, often correlations are not robust (e.g., they may change between domains, datasets, or applications) and models that exploit them do not generalize when correlations shift. Disentanglement methods aim to learn representations which capture different factors of variation in latent subspaces. A common approach involves minimizing the mutual information between latent subspaces, such that each encodes a single underlying attribute. However, this fails when attributes are correlated. We solve this problem by enforcing independence between subspaces conditioned on the available attributes, which allows us to remove only dependencies that are not due to the correlation structure present in the training data. We achieve this via an adversarial approach to minimize the conditional mutual information (CMI) between subspaces with respect to categorical variables. We first show theoretically that CMI minimization is a good objective for robust disentanglement on linear problems. We then apply our method on real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings.
APA
Funke, C. M., Vicol, P., Wang, K., Kuemmerer, M., Zemel, R. & Bethge, M. (2022). Disentanglement and Generalization Under Correlation Shifts. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:116-141. Available from https://proceedings.mlr.press/v199/funke22a.html.