Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification

Jie Wen, Yadong Liu, Zhanyan Tang, Yuting He, Yulong Chen, Mu Li, Chengliang Liu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:66467-66480, 2025.

Abstract

Multi-view data encompasses various data forms, such as multi-feature, multi-sequence, and multimodal data, providing rich semantic information for downstream tasks. The inherent challenge of incomplete multi-view missing multi-label learning lies in how to effectively utilize limited supervision and insufficient data to learn discriminative representations. Starting from the sufficiency of multi-view shared information for downstream tasks, we argue that existing contrastive learning paradigms on missing multi-view data have limited ability to learn consistent representations, creating a bottleneck in extracting multi-view shared information. In response, we propose to minimize task-independent redundant information by maximizing cross-view mutual information. Additionally, to alleviate the hindrance caused by missing labels, we develop a dual-branch soft pseudo-label cross-imputation strategy that improves classification performance. Extensive experiments on multiple benchmarks validate the advantages of our method and demonstrate strong compatibility with both missing and complete data.
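The abstract names two concrete mechanisms: maximizing cross-view mutual information to remove task-independent redundancy, and a dual-branch soft pseudo-label cross-imputation strategy for missing labels. The paper itself is the authoritative source for both; as a rough illustration only, the sketch below pairs a standard InfoNCE objective (a common lower bound on cross-view mutual information) with a simple cross-imputation rule in which each branch fills a sample's missing labels with the other branch's soft predictions. The encoder widths, temperature, and imputation rule are our own assumptions, not the authors' implementation.

```python
# Illustrative PyTorch sketch, NOT the authors' code: two view-specific
# encoders, a cross-view InfoNCE loss, and dual classification branches
# that cross-impute each other's missing labels with soft predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewModel(nn.Module):
    def __init__(self, dim_v1, dim_v2, dim_z, num_labels):
        super().__init__()
        # One encoder per view, mapping into a shared latent space
        # (hypothetical architecture).
        self.enc1 = nn.Sequential(nn.Linear(dim_v1, 256), nn.ReLU(), nn.Linear(256, dim_z))
        self.enc2 = nn.Sequential(nn.Linear(dim_v2, 256), nn.ReLU(), nn.Linear(256, dim_z))
        # Dual classification branches, one per view.
        self.head1 = nn.Linear(dim_z, num_labels)
        self.head2 = nn.Linear(dim_z, num_labels)

def info_nce(z1, z2, temperature=0.5):
    """Cross-view InfoNCE: a standard lower bound on I(z1; z2). Maximizing it
    pulls the two views of the same sample together and pushes apart views of
    different samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))           # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def cross_impute(p1, p2, label_mask, labels):
    """Soft pseudo-label cross-imputation (an assumed rule): where a label is
    observed (mask == 1), keep it; where it is missing, each branch is
    supervised by the other branch's detached soft prediction."""
    y1 = label_mask * labels + (1 - label_mask) * p2.detach()
    y2 = label_mask * labels + (1 - label_mask) * p1.detach()
    return y1, y2

# One training step on a toy batch.
B, d1, d2, dz, C = 8, 20, 30, 16, 5
model = TwoViewModel(d1, d2, dz, C)
x1, x2 = torch.randn(B, d1), torch.randn(B, d2)
labels = torch.randint(0, 2, (B, C)).float()      # multi-label targets
label_mask = torch.randint(0, 2, (B, C)).float()  # 1 = label observed

z1, z2 = model.enc1(x1), model.enc2(x2)
p1, p2 = torch.sigmoid(model.head1(z1)), torch.sigmoid(model.head2(z2))
y1, y2 = cross_impute(p1, p2, label_mask, labels)
loss = info_nce(z1, z2) + F.binary_cross_entropy(p1, y1) + F.binary_cross_entropy(p2, y2)
loss.backward()
```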

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-wen25c,
  title     = {Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification},
  author    = {Wen, Jie and Liu, Yadong and Tang, Zhanyan and He, Yuting and Chen, Yulong and Li, Mu and Liu, Chengliang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {66467--66480},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wen25c/wen25c.pdf},
  url       = {https://proceedings.mlr.press/v267/wen25c.html},
  abstract  = {Multi-view data encompasses various data forms, such as multi-feature, multi-sequence, and multimodal data, providing rich semantic information for downstream tasks. The inherent challenge of incomplete multi-view missing multi-label learning lies in how to effectively utilize limited supervision and insufficient data to learn discriminative representations. Starting from the sufficiency of multi-view shared information for downstream tasks, we argue that existing contrastive learning paradigms on missing multi-view data have limited ability to learn consistent representations, creating a bottleneck in extracting multi-view shared information. In response, we propose to minimize task-independent redundant information by maximizing cross-view mutual information. Additionally, to alleviate the hindrance caused by missing labels, we develop a dual-branch soft pseudo-label cross-imputation strategy that improves classification performance. Extensive experiments on multiple benchmarks validate the advantages of our method and demonstrate strong compatibility with both missing and complete data.}
}
Endnote
%0 Conference Paper
%T Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification
%A Jie Wen
%A Yadong Liu
%A Zhanyan Tang
%A Yuting He
%A Yulong Chen
%A Mu Li
%A Chengliang Liu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wen25c
%I PMLR
%P 66467--66480
%U https://proceedings.mlr.press/v267/wen25c.html
%V 267
%X Multi-view data encompasses various data forms, such as multi-feature, multi-sequence, and multimodal data, providing rich semantic information for downstream tasks. The inherent challenge of incomplete multi-view missing multi-label learning lies in how to effectively utilize limited supervision and insufficient data to learn discriminative representations. Starting from the sufficiency of multi-view shared information for downstream tasks, we argue that existing contrastive learning paradigms on missing multi-view data have limited ability to learn consistent representations, creating a bottleneck in extracting multi-view shared information. In response, we propose to minimize task-independent redundant information by maximizing cross-view mutual information. Additionally, to alleviate the hindrance caused by missing labels, we develop a dual-branch soft pseudo-label cross-imputation strategy that improves classification performance. Extensive experiments on multiple benchmarks validate the advantages of our method and demonstrate strong compatibility with both missing and complete data.
APA
Wen, J., Liu, Y., Tang, Z., He, Y., Chen, Y., Li, M. & Liu, C. (2025). Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:66467-66480. Available from https://proceedings.mlr.press/v267/wen25c.html.
