Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling

Chengliang Liu, Gehui Xu, Jie Wen, Yabo Liu, Chao Huang, Yong Xu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32253-32267, 2024.

Abstract

The difficulty of partial multi-view multi-label learning lies in coupling the consensus of multi-view data with the task relevance of multi-label classification, under the condition where partial views and labels are unavailable. In this paper, we seek to compress cross-view representation to maximize the proportion of shared information to better predict semantic tags. To achieve this, we establish a model consistent with the information bottleneck theory for learning cross-view shared representation, minimizing non-shared information while maintaining feature validity to help increase the purity of task-relevant information. Furthermore, we model multi-label prototype instances in the latent space and learn label correlations in a data-driven manner. Our method outperforms existing state-of-the-art methods on multiple public datasets while exhibiting good compatibility with both partial and complete data. Finally, we experimentally reveal the importance of condensing shared information under the premise of information balancing, in the process of multi-view information encoding and compression.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-liu24bv,
  title     = {Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling},
  author    = {Liu, Chengliang and Xu, Gehui and Wen, Jie and Liu, Yabo and Huang, Chao and Xu, Yong},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {32253--32267},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bv/liu24bv.pdf},
  url       = {https://proceedings.mlr.press/v235/liu24bv.html},
  abstract  = {The difficulty of partial multi-view multi-label learning lies in coupling the consensus of multi-view data with the task relevance of multi-label classification, under the condition where partial views and labels are unavailable. In this paper, we seek to compress cross-view representation to maximize the proportion of shared information to better predict semantic tags. To achieve this, we establish a model consistent with the information bottleneck theory for learning cross-view shared representation, minimizing non-shared information while maintaining feature validity to help increase the purity of task-relevant information. Furthermore, we model multi-label prototype instances in the latent space and learn label correlations in a data-driven manner. Our method outperforms existing state-of-the-art methods on multiple public datasets while exhibiting good compatibility with both partial and complete data. Finally, we experimentally reveal the importance of condensing shared information under the premise of information balancing, in the process of multi-view information encoding and compression.}
}
Endnote
%0 Conference Paper
%T Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling
%A Chengliang Liu
%A Gehui Xu
%A Jie Wen
%A Yabo Liu
%A Chao Huang
%A Yong Xu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-liu24bv
%I PMLR
%P 32253--32267
%U https://proceedings.mlr.press/v235/liu24bv.html
%V 235
%X The difficulty of partial multi-view multi-label learning lies in coupling the consensus of multi-view data with the task relevance of multi-label classification, under the condition where partial views and labels are unavailable. In this paper, we seek to compress cross-view representation to maximize the proportion of shared information to better predict semantic tags. To achieve this, we establish a model consistent with the information bottleneck theory for learning cross-view shared representation, minimizing non-shared information while maintaining feature validity to help increase the purity of task-relevant information. Furthermore, we model multi-label prototype instances in the latent space and learn label correlations in a data-driven manner. Our method outperforms existing state-of-the-art methods on multiple public datasets while exhibiting good compatibility with both partial and complete data. Finally, we experimentally reveal the importance of condensing shared information under the premise of information balancing, in the process of multi-view information encoding and compression.
APA
Liu, C., Xu, G., Wen, J., Liu, Y., Huang, C. & Xu, Y. (2024). Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32253-32267. Available from https://proceedings.mlr.press/v235/liu24bv.html.