Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models

Anton Mallasto, Søren Hauberg, Aasa Feragen
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2368-2377, 2019.

Abstract

Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space. In a number of applications, a priori known spatial constraints can shrink the ambient space into a considerably smaller manifold. Additionally, in these applications the Euclidean geometry might induce a suboptimal similarity measure, which could be improved by choosing a different metric. Euclidean models ignore such information and assign probability mass to data points that can never appear as data, and vastly different likelihoods to points that are similar under the desired metric. We propose the wrapped Gaussian process latent variable model (WGPLVM), which extends Gaussian process latent variable models to take values strictly on a given Riemannian manifold, making the model blind to impossible data points. This allows non-linear, probabilistic inference of low-dimensional Riemannian submanifolds from data. Our evaluation on diverse datasets shows that we improve performance on several tasks, including encoding, visualization and uncertainty quantification.
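The core construction behind the wrapped model is to sample in a tangent space and push the sample onto the manifold with the exponential map. The sketch below illustrates this for a wrapped Gaussian on the unit sphere S^2; it is a minimal, hypothetical example of the general idea, not the paper's WGPLVM implementation (function names, the choice of base point, and the covariance are all illustrative assumptions):

```python
import numpy as np

def sphere_exp(mu, v):
    """Exponential map on the unit sphere: follow the geodesic
    from base point mu in tangent direction v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return mu.copy()
    return np.cos(norm) * mu + np.sin(norm) * (v / norm)

def sample_wrapped_gaussian(mu, cov, basis, rng):
    """Draw one sample from a wrapped Gaussian on the sphere:
    sample tangent coefficients from N(0, cov) in the 2-D tangent
    space at mu (spanned by `basis`), then wrap via the exp map."""
    coeffs = rng.multivariate_normal(np.zeros(2), cov)
    v = coeffs[0] * basis[0] + coeffs[1] * basis[1]
    return sphere_exp(mu, v)

# Illustrative setup: base point at the north pole with an
# orthonormal tangent basis and an isotropic tangent covariance.
mu = np.array([0.0, 0.0, 1.0])
basis = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
cov = 0.1 * np.eye(2)
rng = np.random.default_rng(0)

x = sample_wrapped_gaussian(mu, cov, basis, rng)
# Every sample lies exactly on the sphere, so no probability mass
# is assigned to impossible (off-manifold) points.
print(np.linalg.norm(x))
```

Unlike a Euclidean Gaussian fitted to sphere-valued data, every draw from this construction satisfies the manifold constraint by design, which is the property the abstract emphasizes.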

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-mallasto19a,
  title     = {Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models},
  author    = {Mallasto, Anton and Hauberg, S{\o}ren and Feragen, Aasa},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2368--2377},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/mallasto19a/mallasto19a.pdf},
  url       = {https://proceedings.mlr.press/v89/mallasto19a.html},
  abstract  = {Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space. In a number of applications, a priori known spatial constraints can shrink the ambient space into a considerably smaller manifold. Additionally, in these applications the Euclidean geometry might induce a suboptimal similarity measure, which could be improved by choosing a different metric. Euclidean models ignore such information and assign probability mass to data points that can never appear as data, and vastly different likelihoods to points that are similar under the desired metric. We propose the wrapped Gaussian process latent variable model (WGPLVM), which extends Gaussian process latent variable models to take values strictly on a given Riemannian manifold, making the model blind to impossible data points. This allows non-linear, probabilistic inference of low-dimensional Riemannian submanifolds from data. Our evaluation on diverse datasets shows that we improve performance on several tasks, including encoding, visualization and uncertainty quantification.}
}
Endnote
%0 Conference Paper
%T Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models
%A Anton Mallasto
%A Søren Hauberg
%A Aasa Feragen
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-mallasto19a
%I PMLR
%P 2368--2377
%U https://proceedings.mlr.press/v89/mallasto19a.html
%V 89
%X Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space. In a number of applications, a priori known spatial constraints can shrink the ambient space into a considerably smaller manifold. Additionally, in these applications the Euclidean geometry might induce a suboptimal similarity measure, which could be improved by choosing a different metric. Euclidean models ignore such information and assign probability mass to data points that can never appear as data, and vastly different likelihoods to points that are similar under the desired metric. We propose the wrapped Gaussian process latent variable model (WGPLVM), which extends Gaussian process latent variable models to take values strictly on a given Riemannian manifold, making the model blind to impossible data points. This allows non-linear, probabilistic inference of low-dimensional Riemannian submanifolds from data. Our evaluation on diverse datasets shows that we improve performance on several tasks, including encoding, visualization and uncertainty quantification.
APA
Mallasto, A., Hauberg, S., & Feragen, A. (2019). Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2368-2377. Available from https://proceedings.mlr.press/v89/mallasto19a.html.