Unveiling the Latent Space Geometry of Push-Forward Generative Models

Thibaut Issenhuth, Ugo Tanielian, Jeremie Mary, David Picard
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:14422-14444, 2023.

Abstract

Many deep generative models, such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs), are defined as the push-forward of a Gaussian measure by a continuous generator. This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.
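To make the setting concrete, the sketch below samples from a push-forward model and applies a simplex-style latent truncation. It is a minimal NumPy illustration under stated assumptions: the toy generator, the choice of simplex vertices, and the alpha-interpolation step are placeholders for exposition, not the paper's actual architecture or truncation procedure.

import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_modes = 8, 3

# Stand-in for a trained continuous generator G (e.g., a GAN generator);
# a fixed random linear map plus tanh keeps the sketch self-contained.
W = rng.normal(size=(latent_dim, 2))
def generator(z):
    return np.tanh(z @ W)

def push_forward_samples(n):
    # Push-forward sampling: draw z ~ N(0, I) and map it through G, so the
    # model distribution is the push-forward of the Gaussian measure by G.
    return generator(rng.normal(size=(n, latent_dim)))

def simplex_truncation(z, vertices, alpha=0.5):
    # Hypothetical truncation (illustrative, not the paper's procedure):
    # pull each latent code a fraction alpha of the way toward its nearest
    # vertex, encouraging a simplicial cluster structure in latent space
    # with one latent cluster per target mode.
    dists = np.linalg.norm(z[:, None, :] - vertices[None, :, :], axis=-1)
    nearest = vertices[dists.argmin(axis=1)]
    return (1.0 - alpha) * z + alpha * nearest

# One vertex per mode; feasible here since latent_dim exceeds n_modes.
vertices = 3.0 * np.eye(latent_dim)[:n_modes]

x_plain = push_forward_samples(16)                    # untruncated samples
z = rng.normal(size=(16, latent_dim))
x_trunc = generator(simplex_truncation(z, vertices))  # truncated samples
print(x_plain.shape, x_trunc.shape)                   # (16, 2) (16, 2)

Interpolating latent codes toward per-mode vertices is one simple way to realize the simplicial cluster structure the abstract mentions; the exact construction used by the authors is given in the full paper.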

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-issenhuth23a,
  title     = {Unveiling the Latent Space Geometry of Push-Forward Generative Models},
  author    = {Issenhuth, Thibaut and Tanielian, Ugo and Mary, Jeremie and Picard, David},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {14422--14444},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/issenhuth23a/issenhuth23a.pdf},
  url       = {https://proceedings.mlr.press/v202/issenhuth23a.html},
  abstract  = {Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.}
}
Endnote
%0 Conference Paper
%T Unveiling the Latent Space Geometry of Push-Forward Generative Models
%A Thibaut Issenhuth
%A Ugo Tanielian
%A Jeremie Mary
%A David Picard
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-issenhuth23a
%I PMLR
%P 14422--14444
%U https://proceedings.mlr.press/v202/issenhuth23a.html
%V 202
%X Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.
APA
Issenhuth, T., Tanielian, U., Mary, J. & Picard, D. (2023). Unveiling the Latent Space Geometry of Push-Forward Generative Models. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:14422-14444. Available from https://proceedings.mlr.press/v202/issenhuth23a.html.
