Learning disconnected manifolds: a no GAN’s land

Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jeremie Mary
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9418-9427, 2020.

Abstract

Typical architectures of Generative Adversarial Networks make use of a unimodal latent/input distribution transformed by a continuous generator. Consequently, the modeled distribution always has connected support, which is cumbersome when learning a disconnected set of manifolds. We formalize this problem by establishing a "no free lunch" theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution. This is done by building on the necessary existence of a low-quality region where the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator’s Jacobian and show its efficiency on several generators, including BigGAN.
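The rejection idea in the abstract can be illustrated with a toy sketch: score each latent sample by the norm of the generator's Jacobian and discard samples whose score is large, since those land in the low-quality transition region between modes. Everything below is an illustrative assumption, not the paper's implementation: `toy_generator`, the finite-difference Jacobian estimate, and the `keep_ratio` truncation threshold are all made up for the example.

```python
import numpy as np

def toy_generator(z):
    # Toy 1-D "generator": a steep tanh that maps latents onto two modes
    # near -1 and +1, forcing a continuous crossing of the gap between them.
    return np.tanh(10.0 * z)

def jacobian_norm(g, z, eps=1e-4):
    # Finite-difference estimate of |dg/dz| at each latent point
    # (a stand-in for autodiff on a real network).
    return np.abs(g(z + eps) - g(z - eps)) / (2 * eps)

def jacobian_rejection_sampling(g, n_samples, keep_ratio=0.8, rng=None):
    # Draw latents, score them by Jacobian norm, and keep only the fraction
    # with the smallest norms. keep_ratio is a heuristic threshold chosen
    # for this sketch, not the paper's exact acceptance rule.
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(n_samples)
    scores = jacobian_norm(g, z)
    threshold = np.quantile(scores, keep_ratio)
    kept = z[scores <= threshold]
    return g(kept)

samples = jacobian_rejection_sampling(toy_generator, 10_000)
# Kept samples concentrate near the two modes; latents mapped through the
# steep transition region (large Jacobian norm) are rejected.
```

On this toy example the rejected 20% of samples are exactly those drawn from the steep middle of the tanh, i.e. the generator's off-manifold interpolation region, which mirrors the paper's observation that the Jacobian norm flags low-quality samples.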

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-tanielian20a,
  title     = {Learning disconnected manifolds: a no {GAN}’s land},
  author    = {Tanielian, Ugo and Issenhuth, Thibaut and Dohmatob, Elvis and Mary, Jeremie},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9418--9427},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/tanielian20a/tanielian20a.pdf},
  url       = {https://proceedings.mlr.press/v119/tanielian20a.html},
  abstract  = {Typical architectures of Generative Adversarial Networks make use of a unimodal latent/input distribution transformed by a continuous generator. Consequently, the modeled distribution always has connected support, which is cumbersome when learning a disconnected set of manifolds. We formalize this problem by establishing a "no free lunch" theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution. This is done by building on the necessary existence of a low-quality region where the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator’s Jacobian and show its efficiency on several generators, including BigGAN.}
}
Endnote
%0 Conference Paper
%T Learning disconnected manifolds: a no GAN’s land
%A Ugo Tanielian
%A Thibaut Issenhuth
%A Elvis Dohmatob
%A Jeremie Mary
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-tanielian20a
%I PMLR
%P 9418--9427
%U https://proceedings.mlr.press/v119/tanielian20a.html
%V 119
%X Typical architectures of Generative Adversarial Networks make use of a unimodal latent/input distribution transformed by a continuous generator. Consequently, the modeled distribution always has connected support, which is cumbersome when learning a disconnected set of manifolds. We formalize this problem by establishing a "no free lunch" theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution. This is done by building on the necessary existence of a low-quality region where the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator’s Jacobian and show its efficiency on several generators, including BigGAN.
APA
Tanielian, U., Issenhuth, T., Dohmatob, E. &amp; Mary, J. (2020). Learning disconnected manifolds: a no GAN’s land. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9418-9427. Available from https://proceedings.mlr.press/v119/tanielian20a.html.