Pulling back information geometry

Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitrios Kalatzis, Soren Hauberg
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:4872-4894, 2022.

Abstract

Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models. The existing theory, however, relies on the decoder being a Gaussian distribution as its simple reparametrization allows us to interpret the generating process as a random projection of a deterministic manifold. Consequently, this approach breaks down when applied to decoders that are not as easily reparametrized. We here propose to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which we pull back to the latent space. We show that we can achieve meaningful latent geometries for a wide range of decoder distributions for which the previous theory was not applicable, opening the door to 'black box' latent geometries.

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-arvanitidis22b,
  title     = {Pulling back information geometry},
  author    = {Arvanitidis, Georgios and Gonz\'alez-Duque, Miguel and Pouplin, Alison and Kalatzis, Dimitrios and Hauberg, Soren},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {4872--4894},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/arvanitidis22b/arvanitidis22b.pdf},
  url       = {https://proceedings.mlr.press/v151/arvanitidis22b.html},
  abstract  = {Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models. The existing theory, however, relies on the decoder being a Gaussian distribution as its simple reparametrization allows us to interpret the generating process as a random projection of a deterministic manifold. Consequently, this approach breaks down when applied to decoders that are not as easily reparametrized. We here propose to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which we pull back to the latent space. We show that we can achieve meaningful latent geometries for a wide range of decoder distributions for which the previous theory was not applicable, opening the door to 'black box' latent geometries.}
}
Endnote
%0 Conference Paper
%T Pulling back information geometry
%A Georgios Arvanitidis
%A Miguel González-Duque
%A Alison Pouplin
%A Dimitrios Kalatzis
%A Soren Hauberg
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-arvanitidis22b
%I PMLR
%P 4872--4894
%U https://proceedings.mlr.press/v151/arvanitidis22b.html
%V 151
%X Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models. The existing theory, however, relies on the decoder being a Gaussian distribution as its simple reparametrization allows us to interpret the generating process as a random projection of a deterministic manifold. Consequently, this approach breaks down when applied to decoders that are not as easily reparametrized. We here propose to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which we pull back to the latent space. We show that we can achieve meaningful latent geometries for a wide range of decoder distributions for which the previous theory was not applicable, opening the door to 'black box' latent geometries.
APA
Arvanitidis, G., González-Duque, M., Pouplin, A., Kalatzis, D. & Hauberg, S. (2022). Pulling back information geometry. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:4872-4894. Available from https://proceedings.mlr.press/v151/arvanitidis22b.html.

Related Material