The Usual Suspects? Reassessing Blame for VAE Posterior Collapse

Bin Dai, Ziyu Wang, David Wipf
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2313-2322, 2020.

Abstract

In narrow asymptotic settings, Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather, to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
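
For context, posterior collapse can be made precise with a standard-notation sketch (the notation below follows common Gaussian VAE conventions and is not reproduced from the paper). The model is trained by maximizing the evidence lower bound

\mathcal{L}(x;\theta,\phi) \;=\; \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right), \qquad p(z)=\mathcal{N}(0,I).

With a Gaussian encoder q_\phi(z|x)=\mathcal{N}\big(\mu_\phi(x),\,\mathrm{diag}\,\sigma_\phi^2(x)\big), the KL term decomposes over latent dimensions j as \tfrac{1}{2}\big(\mu_{\phi,j}(x)^2+\sigma_{\phi,j}(x)^2-\log\sigma_{\phi,j}(x)^2-1\big). Posterior collapse refers to solutions where this term is (near) zero for a dimension across essentially all inputs, i.e. \mu_{\phi,j}(x)\approx 0 and \sigma_{\phi,j}(x)\approx 1, so that the corresponding latent dimension carries no information about x. The conventional account attributes such solutions to the KL term overpowering the reconstruction term; the abstract above instead argues that bad local minima of the underlying deep autoencoder loss surface can produce them as well.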

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-dai20c,
  title     = {The Usual Suspects? {R}eassessing Blame for {VAE} Posterior Collapse},
  author    = {Dai, Bin and Wang, Ziyu and Wipf, David},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2313--2322},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/dai20c/dai20c.pdf},
  url       = {https://proceedings.mlr.press/v119/dai20c.html}
}
Endnote
%0 Conference Paper
%T The Usual Suspects? Reassessing Blame for VAE Posterior Collapse
%A Bin Dai
%A Ziyu Wang
%A David Wipf
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-dai20c
%I PMLR
%P 2313--2322
%U https://proceedings.mlr.press/v119/dai20c.html
%V 119
APA
Dai, B., Wang, Z. & Wipf, D. (2020). The Usual Suspects? Reassessing Blame for VAE Posterior Collapse. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2313-2322. Available from https://proceedings.mlr.press/v119/dai20c.html.