CCA-Flow: Deep Multi-view Subspace Learning with Inverse Autoregressive Flow

Jia He, Feiyang Pan, Fuzhen Zhuang, Qing He
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:177-192, 2020.

Abstract

Multi-view subspace learning aims to learn a shared representation from multiple sources or views of an entity. The learned representation enables reconstruction of common patterns in multi-view data, which helps with dimensionality reduction, exploratory data analysis, missing-view completion, and various downstream tasks. However, existing methods often use simply structured approximations of the posterior over the shared latent variables for the sake of computational efficiency. Such oversimplified models can severely degrade inference quality and hurt representation power. To this end, we propose a new method for multi-view subspace learning that achieves efficient Bayesian inference with strong representation power. Our method, coined CCA-Flow, is based on variational Canonical Correlation Analysis and models the inference network as an Inverse Autoregressive Flow (IAF). With flow-based variational inference imposed on the latent variables, the posterior approximations can be arbitrarily complex and flexible, while the model can still be trained efficiently with stochastic gradient descent. Experiments on three benchmark multi-view datasets show that our model gives improved representations of the shared latent variables and outperforms previous methods.
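To make the IAF posterior concrete, the following is a minimal NumPy sketch of one Inverse Autoregressive Flow transform of the kind the abstract refers to: a Gaussian base sample is shifted and scaled by autoregressive networks, and the triangular Jacobian makes the log-determinant cheap to compute. The masked linear layers, dimensions, and initialization here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

# A strictly lower-triangular mask enforces the autoregressive property:
# the outputs for dimension i depend only on dimensions z_{<i}.
mask = np.tril(np.ones((D, D)), k=-1)
W_m = rng.normal(scale=0.1, size=(D, D)) * mask  # weights for the shift mu(z)
W_s = rng.normal(scale=0.1, size=(D, D)) * mask  # weights for the log-scale

def iaf_step(z):
    """One IAF transform: z' = mu(z) + sigma(z) * z, with its log|det Jacobian|."""
    mu = z @ W_m.T
    log_sigma = z @ W_s.T
    z_new = mu + np.exp(log_sigma) * z
    # The Jacobian is triangular, so its log-determinant is simply the
    # sum of the per-dimension log-scales.
    log_det = log_sigma.sum(axis=-1)
    return z_new, log_det

z0 = rng.normal(size=(3, D))  # samples from the simple base posterior
z1, ld = iaf_step(z0)         # one flow step toward a more flexible posterior
```

In practice several such steps are stacked, and the accumulated `log_det` terms enter the evidence lower bound, which is why the richer posterior stays trainable with stochastic gradient descent.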

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-he20a,
  title     = {CCA-Flow: Deep Multi-view Subspace Learning with Inverse Autoregressive Flow},
  author    = {He, Jia and Pan, Feiyang and Zhuang, Fuzhen and He, Qing},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {177--192},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/he20a/he20a.pdf},
  url       = {https://proceedings.mlr.press/v129/he20a.html},
  abstract  = {Multi-view subspace learning aims to learn a shared representation from multiple sources or views of an entity. The learned representation enables reconstruction of common patterns in multi-view data, which helps with dimensionality reduction, exploratory data analysis, missing-view completion, and various downstream tasks. However, existing methods often use simply structured approximations of the posterior over the shared latent variables for the sake of computational efficiency. Such oversimplified models can severely degrade inference quality and hurt representation power. To this end, we propose a new method for multi-view subspace learning that achieves efficient Bayesian inference with strong representation power. Our method, coined CCA-Flow, is based on variational Canonical Correlation Analysis and models the inference network as an Inverse Autoregressive Flow (IAF). With flow-based variational inference imposed on the latent variables, the posterior approximations can be arbitrarily complex and flexible, while the model can still be trained efficiently with stochastic gradient descent. Experiments on three benchmark multi-view datasets show that our model gives improved representations of the shared latent variables and outperforms previous methods.}
}
Endnote
%0 Conference Paper
%T CCA-Flow: Deep Multi-view Subspace Learning with Inverse Autoregressive Flow
%A Jia He
%A Feiyang Pan
%A Fuzhen Zhuang
%A Qing He
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-he20a
%I PMLR
%P 177--192
%U https://proceedings.mlr.press/v129/he20a.html
%V 129
%X Multi-view subspace learning aims to learn a shared representation from multiple sources or views of an entity. The learned representation enables reconstruction of common patterns in multi-view data, which helps with dimensionality reduction, exploratory data analysis, missing-view completion, and various downstream tasks. However, existing methods often use simply structured approximations of the posterior over the shared latent variables for the sake of computational efficiency. Such oversimplified models can severely degrade inference quality and hurt representation power. To this end, we propose a new method for multi-view subspace learning that achieves efficient Bayesian inference with strong representation power. Our method, coined CCA-Flow, is based on variational Canonical Correlation Analysis and models the inference network as an Inverse Autoregressive Flow (IAF). With flow-based variational inference imposed on the latent variables, the posterior approximations can be arbitrarily complex and flexible, while the model can still be trained efficiently with stochastic gradient descent. Experiments on three benchmark multi-view datasets show that our model gives improved representations of the shared latent variables and outperforms previous methods.
APA
He, J., Pan, F., Zhuang, F. &amp; He, Q. (2020). CCA-Flow: Deep Multi-view Subspace Learning with Inverse Autoregressive Flow. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:177-192. Available from https://proceedings.mlr.press/v129/he20a.html.