Correlated Variational Auto-Encoders

Da Tang, Dawen Liang, Tony Jebara, Nicholas Ruozzi
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6135-6144, 2019.

Abstract

Variational Auto-Encoders (VAEs) are capable of learning latent representations for high-dimensional data. However, due to the i.i.d. assumption, VAEs only optimize the singleton variational distributions and fail to account for correlations between data points, which can be crucial for learning latent representations from datasets where we know a priori that correlations exist. We propose Correlated Variational Auto-Encoders (CVAEs), which take the correlation structure into consideration when learning latent representations with VAEs. CVAEs apply a prior based on the correlation structure. To address the intractability introduced by the correlated prior, we develop an approximation that averages a set of tractable lower bounds over all maximal acyclic subgraphs of the undirected correlation graph. Experimental results on matching and link prediction on public benchmark rating datasets, and on spectral clustering on a synthetic dataset, show the effectiveness of the proposed method over baseline algorithms.
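As a rough sketch of the construction described above (the notation below is ours and the paper's exact formulation may differ): for any maximal acyclic subgraph T of the correlation graph G, a tree-structured prior over the latent codes factorizes into singleton and pairwise terms, giving a tractable evidence lower bound; the intractable correlated objective is then approximated by averaging these bounds over all maximal acyclic subgraphs.

% Hedged sketch in our own notation; details may differ from the paper.
\[
  p_T(\mathbf{z}) \;=\; \prod_i p(z_i) \prod_{(i,j)\in T} \frac{p(z_i, z_j)}{p(z_i)\,p(z_j)},
  \qquad
  \mathcal{L}_T \;=\; \mathbb{E}_{q(\mathbf{z}\mid\mathbf{x})}\!\left[\log p(\mathbf{x}\mid\mathbf{z})\right]
  \;-\; \mathrm{KL}\!\left(q(\mathbf{z}\mid\mathbf{x}) \,\|\, p_T(\mathbf{z})\right),
\]
\[
  \mathcal{L} \;=\; \frac{1}{|\mathcal{A}(G)|}\sum_{T\in\mathcal{A}(G)} \mathcal{L}_T,
\]
where \(\mathcal{A}(G)\) denotes the set of maximal acyclic subgraphs of \(G\).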

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-tang19b,
  title     = {Correlated Variational Auto-Encoders},
  author    = {Tang, Da and Liang, Dawen and Jebara, Tony and Ruozzi, Nicholas},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6135--6144},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/tang19b/tang19b.pdf},
  url       = {https://proceedings.mlr.press/v97/tang19b.html},
  abstract  = {Variational Auto-Encoders (VAEs) are capable of learning latent representations for high-dimensional data. However, due to the i.i.d. assumption, VAEs only optimize the singleton variational distributions and fail to account for correlations between data points, which can be crucial for learning latent representations from datasets where we know a priori that correlations exist. We propose Correlated Variational Auto-Encoders (CVAEs), which take the correlation structure into consideration when learning latent representations with VAEs. CVAEs apply a prior based on the correlation structure. To address the intractability introduced by the correlated prior, we develop an approximation that averages a set of tractable lower bounds over all maximal acyclic subgraphs of the undirected correlation graph. Experimental results on matching and link prediction on public benchmark rating datasets, and on spectral clustering on a synthetic dataset, show the effectiveness of the proposed method over baseline algorithms.}
}
APA
Tang, D., Liang, D., Jebara, T. & Ruozzi, N. (2019). Correlated Variational Auto-Encoders. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6135-6144. Available from https://proceedings.mlr.press/v97/tang19b.html.