Latent Multi-view Semi-Supervised Classification

Xiaofan Bo, Zhao Kang, Zhitong Zhao, Yuanzhang Su, Wenyu Chen
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:348-362, 2019.

Abstract

To explore the underlying complementary information in multiple views, in this paper we propose a novel Latent Multi-view Semi-Supervised Classification (LMSSC) method. Unlike most existing multi-view semi-supervised classification methods, which learn the graph from the original features, our method seeks an underlying latent representation and performs graph learning and label propagation on that learned representation. By exploiting the complementarity of multiple views, the latent representation can depict the data more comprehensively than any single view alone, which in turn makes the learned graph more accurate and robust. Finally, LMSSC integrates latent representation learning, graph construction, and label propagation into a unified framework in which the subtasks are optimized jointly. Experimental results on real-world benchmark datasets validate the effectiveness of our proposed method.
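The label-propagation component described in the abstract can be illustrated with a standard graph-based scheme (in the style of Zhou et al.'s "learning with local and global consistency"). This is a generic sketch, not the paper's actual LMSSC objective: it builds a Gaussian affinity graph over some feature matrix (which, in LMSSC, would be the learned latent representation rather than the raw features) and iterates the propagation update `F ← αSF + (1 − α)Y`. All function and parameter names here are illustrative.

```python
import numpy as np

def propagate_labels(X, y, n_iter=50, alpha=0.99, sigma=1.0):
    """Generic graph-based label propagation sketch.

    X : (n, d) feature matrix (in LMSSC, the learned latent representation)
    y : (n,) integer labels, with -1 marking unlabeled points
    Returns predicted labels for all n points.
    """
    n = X.shape[0]
    # Gaussian affinity graph over the features
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    deg = W.sum(axis=1)
    S = W / np.sqrt(np.outer(deg, deg))
    # One-hot label matrix; all-zero rows for unlabeled points
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    # Iterate F <- alpha * S @ F + (1 - alpha) * Y
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

With one labeled point per well-separated cluster, the labels spread along the graph to the remaining points; in LMSSC this propagation is coupled with latent representation learning and graph construction rather than run as a separate post-processing step.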

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-bo19a,
  title     = {Latent Multi-view Semi-Supervised Classification},
  author    = {Bo, Xiaofan and Kang, Zhao and Zhao, Zhitong and Su, Yuanzhang and Chen, Wenyu},
  pages     = {348--362},
  year      = {2019},
  editor    = {Wee Sun Lee and Taiji Suzuki},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  address   = {Nagoya, Japan},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/bo19a/bo19a.pdf},
  url       = {http://proceedings.mlr.press/v101/bo19a.html},
  abstract  = {To explore underlying complementary information from multiple views, in this paper, we propose a novel Latent Multi-view Semi-Supervised Classification (LMSSC) method. Unlike most existing multi-view semi-supervised classification methods that learn the graph using original features, our method seeks an underlying latent representation and performs graph learning and label propagation based on the learned latent representation. With the complementarity of multiple views, the latent representation could depict the data more comprehensively than every single view individually, accordingly making the graph more accurate and robust as well. Finally, LMSSC integrates latent representation learning, graph construction, and label propagation into a unified framework, which makes each subtask optimized. Experimental results on real-world benchmark datasets validate the effectiveness of our proposed method.}
}
Endnote
%0 Conference Paper
%T Latent Multi-view Semi-Supervised Classification
%A Xiaofan Bo
%A Zhao Kang
%A Zhitong Zhao
%A Yuanzhang Su
%A Wenyu Chen
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-bo19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 348--362
%U http://proceedings.mlr.press/v101/bo19a.html
%V 101
%W PMLR
%X To explore underlying complementary information from multiple views, in this paper, we propose a novel Latent Multi-view Semi-Supervised Classification (LMSSC) method. Unlike most existing multi-view semi-supervised classification methods that learn the graph using original features, our method seeks an underlying latent representation and performs graph learning and label propagation based on the learned latent representation. With the complementarity of multiple views, the latent representation could depict the data more comprehensively than every single view individually, accordingly making the graph more accurate and robust as well. Finally, LMSSC integrates latent representation learning, graph construction, and label propagation into a unified framework, which makes each subtask optimized. Experimental results on real-world benchmark datasets validate the effectiveness of our proposed method.
APA
Bo, X., Kang, Z., Zhao, Z., Su, Y. & Chen, W. (2019). Latent Multi-view Semi-Supervised Classification. Proceedings of The Eleventh Asian Conference on Machine Learning, in PMLR 101:348-362.