Unscented Autoencoder

Faris Janjos, Lars Rosenbaum, Maxim Dolgov, J. Marius Zoellner
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:14758-14779, 2023.

Abstract

The Variational Autoencoder (VAE) is a seminal approach in deep generative modeling with latent variables. Interpreting its reconstruction process as a nonlinear transformation of samples from the latent posterior distribution, we apply the Unscented Transform (UT) – a well-known distribution approximation used in the Unscented Kalman Filter (UKF) from the field of filtering. A finite set of statistics called sigma points, sampled deterministically, provides a more informative and lower-variance posterior representation than the ubiquitous noise-scaling of the reparameterization trick, while ensuring higher-quality reconstruction. We further boost the performance by replacing the Kullback-Leibler (KL) divergence with the Wasserstein distribution metric that allows for a sharper posterior. Inspired by the two components, we derive a novel, deterministic-sampling flavor of the VAE, the Unscented Autoencoder (UAE), trained purely with regularization-like terms on the per-sample posterior. We empirically show competitive performance in Fréchet Inception Distance scores over closely-related models, in addition to a lower training variance than the VAE.
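The abstract's two ingredients both have simple closed forms. The sketch below (PyTorch, assuming a diagonal Gaussian posterior N(mu, diag(sigma^2)) produced by the encoder) illustrates the classic Unscented Transform sigma points that stand in for reparameterized noise samples, and the closed-form squared 2-Wasserstein distance to a standard normal prior that can replace the KL term. The function names and the kappa spread parameter are illustrative assumptions, not the paper's exact implementation.

```python
import math
import torch

def sigma_points(mu, sigma, kappa=1.0):
    """Deterministic sigma points of a diagonal Gaussian N(mu, diag(sigma^2)).

    mu, sigma: (batch, n) posterior mean and standard deviation from the encoder.
    Returns 2n+1 points of shape (batch, 2n+1, n) and UT weights of shape (2n+1,).
    Uses the original Julier-style UT parameterization with spread parameter kappa.
    """
    _, n = mu.shape
    # Columns of the matrix square root of diag(sigma^2) are sigma_i * e_i,
    # so the offsets are just a scaled diagonal embedding of sigma.
    offsets = math.sqrt(n + kappa) * torch.diag_embed(sigma)  # (batch, n, n)
    center = mu.unsqueeze(1)                                  # (batch, 1, n)
    pts = torch.cat([center, center + offsets, center - offsets], dim=1)
    w0 = kappa / (n + kappa)            # weight of the center point
    wi = 1.0 / (2.0 * (n + kappa))      # weight of each offset point
    weights = torch.cat([torch.tensor([w0]), torch.full((2 * n,), wi)])
    return pts, weights

def w2_to_standard_normal(mu, sigma):
    """Squared 2-Wasserstein distance between N(mu, diag(sigma^2)) and N(0, I),
    which is ||mu||^2 + ||sigma - 1||^2 in closed form for diagonal Gaussians."""
    return (mu ** 2).sum(dim=-1) + ((sigma - 1.0) ** 2).sum(dim=-1)

# Example: a batch of 4 posteriors over an 8-dimensional latent space.
mu, sigma = torch.zeros(4, 8), torch.ones(4, 8)
pts, w = sigma_points(mu, sigma)        # pts: (4, 17, 8), w: (17,), w.sum() == 1
reg = w2_to_standard_normal(mu, sigma)  # (4,) per-sample regularizer
```

In a UAE-style training loop one would decode each of the 2n+1 sigma points, combine the per-point reconstruction losses using the UT weights, and add the Wasserstein term as the per-sample regularizer, roughly as the abstract describes.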

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-janjos23a,
  title     = {Unscented Autoencoder},
  author    = {Janjos, Faris and Rosenbaum, Lars and Dolgov, Maxim and Zoellner, J. Marius},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {14758--14779},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/janjos23a/janjos23a.pdf},
  url       = {https://proceedings.mlr.press/v202/janjos23a.html}
}
Endnote
%0 Conference Paper
%T Unscented Autoencoder
%A Faris Janjos
%A Lars Rosenbaum
%A Maxim Dolgov
%A J. Marius Zoellner
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-janjos23a
%I PMLR
%P 14758--14779
%U https://proceedings.mlr.press/v202/janjos23a.html
%V 202
APA
Janjos, F., Rosenbaum, L., Dolgov, M. & Zoellner, J. M. (2023). Unscented Autoencoder. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:14758-14779. Available from https://proceedings.mlr.press/v202/janjos23a.html.
