Variational Autoencoders with Riemannian Brownian Motion Priors

Dimitrios Kalatzis, David Eklund, Georgios Arvanitidis, Søren Hauberg
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5053-5066, 2020.

Abstract

Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-kalatzis20a,
  title     = {Variational Autoencoders with {R}iemannian Brownian Motion Priors},
  author    = {Kalatzis, Dimitrios and Eklund, David and Arvanitidis, Georgios and Hauberg, S{\o}ren},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5053--5066},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/kalatzis20a/kalatzis20a.pdf},
  url       = {http://proceedings.mlr.press/v119/kalatzis20a.html},
  abstract  = {Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter.}
}
Endnote
%0 Conference Paper
%T Variational Autoencoders with Riemannian Brownian Motion Priors
%A Dimitrios Kalatzis
%A David Eklund
%A Georgios Arvanitidis
%A Søren Hauberg
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-kalatzis20a
%I PMLR
%P 5053--5066
%U http://proceedings.mlr.press/v119/kalatzis20a.html
%V 119
%X Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter.
APA
Kalatzis, D., Eklund, D., Arvanitidis, G. & Hauberg, S. (2020). Variational Autoencoders with Riemannian Brownian Motion Priors. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5053-5066. Available from http://proceedings.mlr.press/v119/kalatzis20a.html.