Symmetric Equilibrium Learning of VAEs

Boris Flach, Dmitrij Schlesinger, Alexander Shekhovtsov
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3214-3222, 2024.

Abstract

We view variational autoencoders (VAEs) as decoder-encoder pairs that map distributions in the data space to distributions in the latent space and vice versa. The standard learning approach for VAEs is maximisation of the evidence lower bound (ELBO). It is asymmetric in that it aims at learning a latent variable model while using the encoder only as an auxiliary means. Moreover, it requires a closed-form prior over the latent variables. This limits its applicability in more complex scenarios, such as general semi-supervised learning or the use of complex generative models as priors. We propose a Nash equilibrium learning approach that is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling. The flexibility and simplicity of this approach allow its application to a wide range of learning scenarios and downstream tasks.
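To make the idea of symmetric, sampling-based learning concrete, below is a minimal sketch in PyTorch with toy Gaussian conditionals. It alternates two maximum-likelihood-style updates: the decoder is fit on pairs (x, z) drawn from the data and the current encoder, and the encoder is fit on pairs drawn from the latent prior and the current decoder. All names, architectures, objectives, and hyperparameters below are illustrative assumptions; the paper defines the exact equilibrium objectives and update rules.

# Minimal, hypothetical sketch of symmetric, sampling-based VAE training.
# The decoder d(x|z) is fit on (x, z) sampled from the data and the encoder;
# the encoder e(z|x) is fit on (z, x) sampled from the prior and the decoder.
# This illustrates the general idea only, not the authors' exact algorithm.
import torch
import torch.nn as nn

D_X, D_Z = 8, 2  # toy data and latent dimensions


class GaussianCond(nn.Module):
    """Conditional Gaussian with learned mean and log-variance."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 2 * d_out)
        )

    def forward(self, inp):
        mu, logvar = self.net(inp).chunk(2, dim=-1)
        return mu, logvar

    def sample(self, inp):
        mu, logvar = self(inp)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def log_prob(self, inp, out):
        # Gaussian log-density, up to an additive constant.
        mu, logvar = self(inp)
        return (-0.5 * (logvar + (out - mu) ** 2 / logvar.exp())).sum(-1)


encoder = GaussianCond(D_X, D_Z)  # e(z | x)
decoder = GaussianCond(D_Z, D_X)  # d(x | z)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)

data = torch.randn(512, D_X)  # stand-in for samples from the data distribution

for step in range(1000):
    # Decoder update: maximum likelihood on (x, z) ~ data x encoder.
    x = data[torch.randint(0, len(data), (64,))]
    with torch.no_grad():
        z = encoder.sample(x)
    loss_dec = -decoder.log_prob(z, x).mean()
    opt_dec.zero_grad()
    loss_dec.backward()
    opt_dec.step()

    # Encoder update: maximum likelihood on (z, x) ~ prior x decoder.
    with torch.no_grad():
        z0 = torch.randn(64, D_Z)  # samples from a standard normal latent prior
        x0 = decoder.sample(z0)
    loss_enc = -encoder.log_prob(x0, z0).mean()
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()

Note that in this sketch neither player needs a closed-form density of the other's marginal: each update only requires samples from the other model's joint distribution, which is the property the abstract emphasises.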

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-flach24a,
  title     = {Symmetric Equilibrium Learning of {VAE}s},
  author    = {Flach, Boris and Schlesinger, Dmitrij and Shekhovtsov, Alexander},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3214--3222},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/flach24a/flach24a.pdf},
  url       = {https://proceedings.mlr.press/v238/flach24a.html}
}
Endnote
%0 Conference Paper
%T Symmetric Equilibrium Learning of VAEs
%A Boris Flach
%A Dmitrij Schlesinger
%A Alexander Shekhovtsov
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-flach24a
%I PMLR
%P 3214--3222
%U https://proceedings.mlr.press/v238/flach24a.html
%V 238
APA
Flach, B., Schlesinger, D. & Shekhovtsov, A. (2024). Symmetric Equilibrium Learning of VAEs. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3214-3222. Available from https://proceedings.mlr.press/v238/flach24a.html.
