Riemann$^2$: Learning Riemannian Submanifolds from Riemannian Data

Leonel Rozo, Miguel González-Duque, Noémie Jaquier, Søren Hauberg
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3097-3105, 2025.

Abstract

Latent variable models are powerful tools for learning low-dimensional manifolds from high-dimensional data. However, when dealing with constrained data such as unit-norm vectors or symmetric positive-definite matrices, existing approaches ignore the underlying geometric constraints or fail to provide meaningful metrics in the latent space. To address these limitations, we propose to learn Riemannian latent representations of such geometric data. To do so, we estimate the pullback metric induced by a Wrapped Gaussian Process Latent Variable Model, which explicitly accounts for the data geometry. This enables us to define geometry-aware notions of distance and shortest paths in the latent space, while ensuring that our model only assigns probability mass to the data manifold. This generalizes previous work and allows us to handle complex tasks in various domains, including robot motion synthesis and analysis of brain connectomes.
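The core geometric object in the abstract is the pullback metric: a smooth decoder from the latent space to the data manifold induces a metric on the latent space via its Jacobian. As a minimal illustrative sketch (not the paper's Wrapped GPLVM, whose pullback metric is an expectation over the stochastic decoder), the snippet below pulls back the Euclidean ambient metric through a toy decoder onto a 2-D latent space; the names `decoder`, `jacobian`, and `pullback_metric` are hypothetical and chosen for clarity.

```python
import numpy as np

def decoder(z):
    # Toy smooth map from a 2-D latent space to the unit sphere S^2 in R^3
    # (spherical coordinates: z[0] = polar angle, z[1] = azimuth).
    return np.array([np.sin(z[0]) * np.cos(z[1]),
                     np.sin(z[0]) * np.sin(z[1]),
                     np.cos(z[0])])

def jacobian(f, z, eps=1e-6):
    # Central finite-difference Jacobian of f at z.
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f(z - dz)) / (2.0 * eps)
    return J

def pullback_metric(f, z):
    # M(z) = J(z)^T J(z): the Euclidean ambient metric pulled back to the
    # latent space. Latent curve lengths measured with M(z) match the lengths
    # of their images on the data manifold.
    J = jacobian(f, z)
    return J.T @ J

z = np.array([0.7, 0.3])
M = pullback_metric(decoder, z)
```

For this particular decoder the pullback metric recovers the familiar round-sphere metric diag(1, sin² z₀) in spherical coordinates, so `M` is symmetric positive-definite and can be fed to any geodesic solver to compute latent shortest paths.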

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-rozo25a,
  title     = {Riemann$^2$: Learning Riemannian Submanifolds from Riemannian Data},
  author    = {Rozo, Leonel and Gonz{\'a}lez-Duque, Miguel and Jaquier, No{\'e}mie and Hauberg, S{\o}ren},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3097--3105},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/rozo25a/rozo25a.pdf},
  url       = {https://proceedings.mlr.press/v258/rozo25a.html},
  abstract  = {Latent variable models are powerful tools for learning low-dimensional manifolds from high-dimensional data. However, when dealing with constrained data such as unit-norm vectors or symmetric positive-definite matrices, existing approaches ignore the underlying geometric constraints or fail to provide meaningful metrics in the latent space. To address these limitations, we propose to learn Riemannian latent representations of such geometric data. To do so, we estimate the pullback metric induced by a Wrapped Gaussian Process Latent Variable Model, which explicitly accounts for the data geometry. This enables us to define geometry-aware notions of distance and shortest paths in the latent space, while ensuring that our model only assigns probability mass to the data manifold. This generalizes previous work and allows us to handle complex tasks in various domains, including robot motion synthesis and analysis of brain connectomes.}
}
Endnote
%0 Conference Paper
%T Riemann$^2$: Learning Riemannian Submanifolds from Riemannian Data
%A Leonel Rozo
%A Miguel González-Duque
%A Noémie Jaquier
%A Søren Hauberg
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-rozo25a
%I PMLR
%P 3097--3105
%U https://proceedings.mlr.press/v258/rozo25a.html
%V 258
%X Latent variable models are powerful tools for learning low-dimensional manifolds from high-dimensional data. However, when dealing with constrained data such as unit-norm vectors or symmetric positive-definite matrices, existing approaches ignore the underlying geometric constraints or fail to provide meaningful metrics in the latent space. To address these limitations, we propose to learn Riemannian latent representations of such geometric data. To do so, we estimate the pullback metric induced by a Wrapped Gaussian Process Latent Variable Model, which explicitly accounts for the data geometry. This enables us to define geometry-aware notions of distance and shortest paths in the latent space, while ensuring that our model only assigns probability mass to the data manifold. This generalizes previous work and allows us to handle complex tasks in various domains, including robot motion synthesis and analysis of brain connectomes.
APA
Rozo, L., González-Duque, M., Jaquier, N., & Hauberg, S. (2025). Riemann$^2$: Learning Riemannian Submanifolds from Riemannian Data. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3097-3105. Available from https://proceedings.mlr.press/v258/rozo25a.html.