Learning Autoencoders with Relational Regularization

Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, Lawrence Carin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10576-10586, 2020.

Abstract

We propose a new algorithmic framework for learning autoencoders of data distributions. In this framework, we minimize the discrepancy between the model distribution and the target one while imposing a relational regularization on a learnable latent prior. This regularization penalizes the fused Gromov-Wasserstein (FGW) distance between the latent prior and its corresponding posterior, allowing us to flexibly learn a structured prior distribution associated with the generative model. Moreover, it lets us co-train multiple autoencoders even when they have heterogeneous architectures and incomparable latent spaces. We implement the framework with two scalable algorithms, making it applicable to both probabilistic and deterministic autoencoders. Our relational regularized autoencoder (RAE) outperforms existing methods, e.g., the variational autoencoder, the Wasserstein autoencoder, and their variants, on image generation. Additionally, our relational co-training strategy for autoencoders achieves encouraging results on both synthetic and real-world multi-view learning tasks.
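To make the relational regularizer concrete, below is a minimal sketch of an FGW discrepancy between samples from a latent prior and an aggregated posterior. It is not the authors' implementation: it assumes the POT library (Python Optimal Transport), uses squared-Euclidean costs, and the names z_prior, z_post, and fgw_regularizer are hypothetical placeholders.

import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def fgw_regularizer(z_prior, z_post, alpha=0.5):
    # z_prior: (n, d1) samples drawn from the learnable latent prior.
    # z_post:  (m, d2) samples drawn from the aggregated posterior.
    n, m = len(z_prior), len(z_post)
    p = np.full(n, 1.0 / n)  # uniform weights over prior samples
    q = np.full(m, 1.0 / m)  # uniform weights over posterior samples
    C1 = ot.dist(z_prior, z_prior)  # intra-space (relational) structure of the prior
    C2 = ot.dist(z_post, z_post)    # intra-space structure of the posterior
    if z_prior.shape[1] == z_post.shape[1]:
        M = ot.dist(z_prior, z_post)  # cross-space feature cost
    else:
        # Incomparable latent spaces: drop the feature term and rely
        # solely on the Gromov (relational) part by setting alpha = 1.
        M = np.zeros((n, m))
        alpha = 1.0
    # alpha trades off the Wasserstein (feature) term against the
    # Gromov-Wasserstein (structure) term; alpha = 1 is pure GW.
    return ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, alpha=alpha)

Because the Gromov term compares only intra-space distance matrices, this discrepancy stays well-defined when the two latent spaces have different dimensions, which is what enables co-training autoencoders with incomparable latent spaces.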

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-xu20e,
  title     = {Learning Autoencoders with Relational Regularization},
  author    = {Xu, Hongteng and Luo, Dixin and Henao, Ricardo and Shah, Svati and Carin, Lawrence},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10576--10586},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/xu20e/xu20e.pdf},
  url       = {https://proceedings.mlr.press/v119/xu20e.html}
}
Endnote
%0 Conference Paper
%T Learning Autoencoders with Relational Regularization
%A Hongteng Xu
%A Dixin Luo
%A Ricardo Henao
%A Svati Shah
%A Lawrence Carin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-xu20e
%I PMLR
%P 10576--10586
%U https://proceedings.mlr.press/v119/xu20e.html
%V 119
APA
Xu, H., Luo, D., Henao, R., Shah, S. & Carin, L. (2020). Learning Autoencoders with Relational Regularization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10576-10586. Available from https://proceedings.mlr.press/v119/xu20e.html.