Regularized Autoencoders via Relaxed Injective Probability Flow

Abhishek Kumar, Ben Poole, Kevin Murphy
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4292-4301, 2020.

Abstract

Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference. However, the invertibility requirement restricts models to have the same latent dimensionality as the inputs. This imposes significant architectural, memory, and computational costs, making them more challenging to scale than other classes of generative models such as Variational Autoencoders (VAEs). We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity. This also provides another perspective on regularized autoencoders (RAEs), with our final objectives resembling RAEs with specific regularizers that are derived by lower bounding the probability flow objective. We empirically demonstrate the promise of the proposed model, improving over VAEs and AEs in terms of sample quality.
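The abstract describes relaxing bijectivity to injectivity so the latent dimension can be smaller than the data dimension, with the resulting objective resembling a regularized autoencoder. The toy sketch below is only an illustration of that RAE-style structure (reconstruction term plus latent and decoder regularizers) with a linear encoder/decoder; the paper's actual regularizers are derived by lower bounding the probability flow objective, and all names and penalty choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: latent dimension k strictly smaller than data dimension d,
# which is exactly what the injectivity relaxation permits.
d, k, n = 8, 2, 64
X = rng.normal(size=(n, d))

# Linear encoder/decoder for illustration only; the paper uses deep
# networks and flow-derived regularizers, not these simple penalties.
E = rng.normal(size=(d, k)) * 0.1
D = rng.normal(size=(k, d)) * 0.1

def rae_style_loss(X, E, D, lam_z=0.1, lam_w=0.01):
    Z = X @ E                       # encode to lower-dimensional latents
    X_hat = Z @ D                   # decode (injective when D has rank k)
    recon = np.mean((X - X_hat) ** 2)
    reg_z = lam_z * np.mean(Z ** 2)  # regularizer on latent codes
    reg_w = lam_w * np.sum(D ** 2)   # penalty on decoder weights
    return recon + reg_z + reg_w

loss = rae_style_loss(X, E, D)
print(float(loss))
```

The key structural point is that `k < d`: the decoder maps a low-dimensional latent space into the data space, which rules out bijectivity but not injectivity.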

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-kumar20a,
  title     = {Regularized Autoencoders via Relaxed Injective Probability Flow},
  author    = {Kumar, Abhishek and Poole, Ben and Murphy, Kevin},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4292--4301},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/kumar20a/kumar20a.pdf},
  url       = {https://proceedings.mlr.press/v108/kumar20a.html},
  abstract  = {Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference. However, the invertibility requirement restricts models to have the same latent dimensionality as the inputs. This imposes significant architectural, memory, and computational costs, making them more challenging to scale than other classes of generative models such as Variational Autoencoders (VAEs). We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity. This also provides another perspective on regularized autoencoders (RAEs), with our final objectives resembling RAEs with specific regularizers that are derived by lower bounding the probability flow objective. We empirically demonstrate the promise of the proposed model, improving over VAEs and AEs in terms of sample quality.}
}
Endnote
%0 Conference Paper
%T Regularized Autoencoders via Relaxed Injective Probability Flow
%A Abhishek Kumar
%A Ben Poole
%A Kevin Murphy
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-kumar20a
%I PMLR
%P 4292--4301
%U https://proceedings.mlr.press/v108/kumar20a.html
%V 108
%X Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference. However, the invertibility requirement restricts models to have the same latent dimensionality as the inputs. This imposes significant architectural, memory, and computational costs, making them more challenging to scale than other classes of generative models such as Variational Autoencoders (VAEs). We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity. This also provides another perspective on regularized autoencoders (RAEs), with our final objectives resembling RAEs with specific regularizers that are derived by lower bounding the probability flow objective. We empirically demonstrate the promise of the proposed model, improving over VAEs and AEs in terms of sample quality.
APA
Kumar, A., Poole, B., & Murphy, K. (2020). Regularized Autoencoders via Relaxed Injective Probability Flow. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:4292-4301. Available from https://proceedings.mlr.press/v108/kumar20a.html.