Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network

Yuri Kinoshita, Kenta Oono, Kenji Fukumizu, Yuichi Yoshida, Shin-Ichi Maeda
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:17041-17060, 2023.

Abstract

Variational autoencoders (VAEs) are among the deep generative models that have enjoyed enormous success over the past decades. In practice, however, they suffer from a problem called posterior collapse, which occurs when the posterior distribution coincides, or collapses, with the prior, taking no information from the latent structure of the input data into account. In this work, we introduce an inverse Lipschitz neural network into the decoder and, based on this architecture, provide a new method, backed by a concrete theoretical guarantee, that can control the degree of posterior collapse in a simple and clear manner for a wide range of VAE models. We also illustrate the effectiveness of our method through several numerical experiments.
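The key architectural ingredient, an inverse Lipschitz map (one satisfying ||f(x) - f(y)|| >= L ||x - y||), can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows one standard way to obtain such a map: f(z) = alpha * z + g(z), where g is beta-Lipschitz with beta < alpha, is (alpha - beta)-inverse Lipschitz by the reverse triangle inequality. All names (`alpha`, `beta`, `g`, `f`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
alpha, beta = 1.0, 0.5  # alpha > beta guarantees inverse Lipschitz constant alpha - beta

# Random linear map rescaled so its spectral norm is at most beta.
W = rng.standard_normal((d, d))
W *= beta / np.linalg.norm(W, 2)

def g(z):
    # beta-Lipschitz: spectral-norm-bounded linear map followed by 1-Lipschitz tanh.
    return np.tanh(z @ W.T)

def f(z):
    # ||f(x) - f(y)|| >= alpha * ||x - y|| - ||g(x) - g(y)|| >= (alpha - beta) * ||x - y||,
    # so f is (alpha - beta)-inverse Lipschitz and cannot collapse distinct latents together.
    return alpha * z + g(z)
```

In a VAE, composing the decoder with such a map keeps distinct latent codes mapped to outputs that stay at least (alpha - beta) times as far apart, which is the intuition behind using the constraint to control posterior collapse.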

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kinoshita23a,
  title     = {Controlling Posterior Collapse by an Inverse {L}ipschitz Constraint on the Decoder Network},
  author    = {Kinoshita, Yuri and Oono, Kenta and Fukumizu, Kenji and Yoshida, Yuichi and Maeda, Shin-Ichi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {17041--17060},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kinoshita23a/kinoshita23a.pdf},
  url       = {https://proceedings.mlr.press/v202/kinoshita23a.html},
  abstract  = {Variational autoencoders (VAEs) are one of the deep generative models that have experienced enormous success over the past decades. However, in practice, they suffer from a problem called posterior collapse, which occurs when the posterior distribution coincides, or collapses, with the prior taking no information from the latent structure of the input data into consideration. In this work, we introduce an inverse Lipschitz neural network into the decoder and, based on this architecture, provide a new method that can control in a simple and clear manner the degree of posterior collapse for a wide range of VAE models equipped with a concrete theoretical guarantee. We also illustrate the effectiveness of our method through several numerical experiments.}
}
Endnote
%0 Conference Paper
%T Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network
%A Yuri Kinoshita
%A Kenta Oono
%A Kenji Fukumizu
%A Yuichi Yoshida
%A Shin-Ichi Maeda
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kinoshita23a
%I PMLR
%P 17041--17060
%U https://proceedings.mlr.press/v202/kinoshita23a.html
%V 202
%X Variational autoencoders (VAEs) are one of the deep generative models that have experienced enormous success over the past decades. However, in practice, they suffer from a problem called posterior collapse, which occurs when the posterior distribution coincides, or collapses, with the prior taking no information from the latent structure of the input data into consideration. In this work, we introduce an inverse Lipschitz neural network into the decoder and, based on this architecture, provide a new method that can control in a simple and clear manner the degree of posterior collapse for a wide range of VAE models equipped with a concrete theoretical guarantee. We also illustrate the effectiveness of our method through several numerical experiments.
APA
Kinoshita, Y., Oono, K., Fukumizu, K., Yoshida, Y. & Maeda, S. (2023). Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:17041-17060. Available from https://proceedings.mlr.press/v202/kinoshita23a.html.