Learning Flat Latent Manifolds with VAEs

Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick Van Der Smagt
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1587-1596, 2020.

Abstract

Measuring the similarity between data points often requires domain knowledge, which can in part be compensated for by relying on unsupervised methods such as latent-variable models, where similarity/distance is estimated in a more compact latent space. Prevalent is the use of the Euclidean metric, which has the drawback of ignoring information about the similarity of data stored in the decoder, as captured by the framework of Riemannian geometry. We propose an extension to the framework of variational auto-encoders that allows learning flat latent manifolds, where the Euclidean metric is a proxy for the similarity between data points. This is achieved by defining the latent space as a Riemannian manifold and by regularising the metric tensor to be a scaled identity matrix. Additionally, we replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one—and formulate the learning problem as a constrained optimisation problem. We evaluate our method on a range of data-sets, including a video-tracking benchmark, where the performance of our unsupervised approach nears that of state-of-the-art supervised approaches, while retaining the computational efficiency of straight-line-based approaches.
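To make the core idea concrete, below is a minimal sketch (not the authors' code) of the flatness regulariser described in the abstract: the decoder induces a Riemannian (pullback) metric G(z) = J(z)ᵀJ(z) on the latent space via its Jacobian J(z), and this metric is penalised towards a scaled identity c·I so that straight-line Euclidean distances in latent space approximate distances on the data manifold. The names `decoder`, `flatness_penalty`, and the constant `c` are illustrative assumptions, and the toy decoder stands in for a trained VAE decoder network.

```python
import jax
import jax.numpy as jnp

def decoder(z):
    # Stand-in decoder: in a real VAE this is a trained neural network
    # mapping latent codes z (dim 2) to data space (dim 5).
    W = jnp.ones((5, 2)) * 0.3
    return jnp.tanh(W @ z)

def metric_tensor(z):
    # Pullback metric induced by the decoder: G(z) = J(z)^T J(z),
    # where J(z) is the decoder Jacobian at z.
    J = jax.jacfwd(decoder)(z)
    return J.T @ J

def flatness_penalty(zs, c):
    # Penalise deviation of G(z) from the scaled identity c*I,
    # averaged over a batch of latent samples; added to the VAE loss.
    def per_point(z):
        G = metric_tensor(z)
        I = jnp.eye(G.shape[0])
        return jnp.sum((G - c * I) ** 2)
    return jnp.mean(jax.vmap(per_point)(zs))

zs = jax.random.normal(jax.random.PRNGKey(0), (8, 2))
print(flatness_penalty(zs, c=1.0))
```

When G(z) ≈ c·I everywhere, geodesic lengths reduce (up to the scale √c) to Euclidean lengths, which is why straight lines in latent space become a computationally cheap proxy for similarity.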

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-chen20i,
  title     = {Learning Flat Latent Manifolds with {VAE}s},
  author    = {Chen, Nutan and Klushyn, Alexej and Ferroni, Francesco and Bayer, Justin and Van Der Smagt, Patrick},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1587--1596},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chen20i/chen20i.pdf},
  url       = {https://proceedings.mlr.press/v119/chen20i.html}
}
Endnote
%0 Conference Paper
%T Learning Flat Latent Manifolds with VAEs
%A Nutan Chen
%A Alexej Klushyn
%A Francesco Ferroni
%A Justin Bayer
%A Patrick Van Der Smagt
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-chen20i
%I PMLR
%P 1587--1596
%U https://proceedings.mlr.press/v119/chen20i.html
%V 119
APA
Chen, N., Klushyn, A., Ferroni, F., Bayer, J., & Van Der Smagt, P. (2020). Learning Flat Latent Manifolds with VAEs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1587-1596. Available from https://proceedings.mlr.press/v119/chen20i.html.