Learning Generative Models across Incomparable Spaces

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:851-861, 2019.

Abstract

Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning.
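The "relationally rather than absolutely" property can be illustrated with a minimal NumPy sketch (an illustration of the underlying idea, not the paper's implementation): the Gromov-Wasserstein distance compares distributions through their intra-space pairwise distance matrices, which are unchanged by rotations and by embedding into a higher-dimensional space, while the raw coordinates are not.

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix of a point cloud X of shape (n, d).
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))  # point cloud in R^2

# Rotate the cloud and embed it in R^3: the absolute coordinates
# (and even the dimension) change, but the relational structure
# that Gromov-Wasserstein operates on does not.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = np.hstack([X @ R.T, np.zeros((5, 1))])  # now in R^3

# The distance matrices agree, so a relational discrepancy is zero
# even though X and Y live in incomparable spaces.
print(np.allclose(pairwise_dists(X), pairwise_dists(Y)))  # True
```

A GAN trained with a loss built on such distance matrices can therefore match the cluster or manifold structure of a reference distribution while remaining free in style, orientation, or dimension.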

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-bunne19a,
  title     = {Learning Generative Models across Incomparable Spaces},
  author    = {Bunne, Charlotte and Alvarez-Melis, David and Krause, Andreas and Jegelka, Stefanie},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {851--861},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/bunne19a/bunne19a.pdf},
  url       = {https://proceedings.mlr.press/v97/bunne19a.html},
  abstract  = {Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning.}
}
Endnote
%0 Conference Paper
%T Learning Generative Models across Incomparable Spaces
%A Charlotte Bunne
%A David Alvarez-Melis
%A Andreas Krause
%A Stefanie Jegelka
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-bunne19a
%I PMLR
%P 851--861
%U https://proceedings.mlr.press/v97/bunne19a.html
%V 97
%X Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning.
APA
Bunne, C., Alvarez-Melis, D., Krause, A. & Jegelka, S. (2019). Learning Generative Models across Incomparable Spaces. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:851-861. Available from https://proceedings.mlr.press/v97/bunne19a.html.