Weakly-Supervised Disentanglement Without Compromises

Francesco Locatello, Ben Poole, Gunnar Raetsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6348-6359, 2020.

Abstract

Intelligent agents should be able to learn useful representations by observing changes in their environment. We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation. First, we theoretically show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations. Second, we provide practical algorithms that learn disentangled representations from pairs of images without requiring annotation of groups, individual factors, or the number of factors that have changed. Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets. Finally, we evaluate our learned representations and find that they are simultaneously useful on a diverse suite of tasks, including generalization under covariate shifts, fairness, and abstract reasoning. Overall, our results demonstrate that weak supervision enables learning of useful disentangled representations in realistic scenarios.
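The weak-supervision setting the abstract describes — pairs of observations that share all but a few of the underlying factors of variation, with the changed factors unannotated — can be illustrated with a minimal sketch. The function below is a hypothetical toy sampler (not the authors' code): it draws a factor vector, then resamples exactly `k` hidden coordinates to produce the second element of the pair.

```python
import numpy as np

def sample_weakly_supervised_pair(num_factors=5, k=1, num_values=10, rng=None):
    """Sample a pair of discrete factor vectors differing in exactly k factors.

    Mirrors the paper's setting: the two observations share all but k
    underlying factors, and the learner is not told WHICH factors changed.
    Hypothetical sketch for illustration only.
    """
    rng = rng or np.random.default_rng()
    z1 = rng.integers(0, num_values, size=num_factors)
    z2 = z1.copy()
    # Pick k factor indices to change; their identity stays hidden.
    changed = rng.choice(num_factors, size=k, replace=False)
    for i in changed:
        new_val = z2[i]
        while new_val == z1[i]:  # resample until the value actually differs
            new_val = rng.integers(0, num_values)
        z2[i] = new_val
    return z1, z2

z1, z2 = sample_weakly_supervised_pair(num_factors=5, k=2)
assert int((z1 != z2).sum()) == 2  # exactly k factors differ
```

In the paper's pipeline each factor vector would be rendered into an image; here we stop at the factor vectors, which is enough to show the structure of the supervision signal.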

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-locatello20a,
  title     = {Weakly-Supervised Disentanglement Without Compromises},
  author    = {Locatello, Francesco and Poole, Ben and Raetsch, Gunnar and Sch{\"o}lkopf, Bernhard and Bachem, Olivier and Tschannen, Michael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6348--6359},
  year      = {2020},
  editor    = {III, Hal Daum{\'e} and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/locatello20a/locatello20a.pdf},
  url       = {https://proceedings.mlr.press/v119/locatello20a.html},
  abstract  = {Intelligent agents should be able to learn useful representations by observing changes in their environment. We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation. First, we theoretically show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations. Second, we provide practical algorithms that learn disentangled representations from pairs of images without requiring annotation of groups, individual factors, or the number of factors that have changed. Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets. Finally, we evaluate our learned representations and find that they are simultaneously useful on a diverse suite of tasks, including generalization under covariate shifts, fairness, and abstract reasoning. Overall, our results demonstrate that weak supervision enables learning of useful disentangled representations in realistic scenarios.}
}
Endnote
%0 Conference Paper
%T Weakly-Supervised Disentanglement Without Compromises
%A Francesco Locatello
%A Ben Poole
%A Gunnar Raetsch
%A Bernhard Schölkopf
%A Olivier Bachem
%A Michael Tschannen
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-locatello20a
%I PMLR
%P 6348--6359
%U https://proceedings.mlr.press/v119/locatello20a.html
%V 119
%X Intelligent agents should be able to learn useful representations by observing changes in their environment. We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation. First, we theoretically show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations. Second, we provide practical algorithms that learn disentangled representations from pairs of images without requiring annotation of groups, individual factors, or the number of factors that have changed. Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets. Finally, we evaluate our learned representations and find that they are simultaneously useful on a diverse suite of tasks, including generalization under covariate shifts, fairness, and abstract reasoning. Overall, our results demonstrate that weak supervision enables learning of useful disentangled representations in realistic scenarios.
APA
Locatello, F., Poole, B., Raetsch, G., Schölkopf, B., Bachem, O., & Tschannen, M. (2020). Weakly-Supervised Disentanglement Without Compromises. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6348-6359. Available from https://proceedings.mlr.press/v119/locatello20a.html.