Incomplete learning of multi-modal connectome for brain disorder diagnosis via modal-mixup and deep supervision

Yanwu Yang, Hairui Chen, Zhikai Chang, Yang Xiang, Chenfei Ye, Ting Ma
Medical Imaging with Deep Learning, PMLR 227:1006-1018, 2024.

Abstract

Recently, the study of multi-modal brain networks has substantially improved brain disorder diagnosis by characterizing multiple types of brain connectivity and exploiting their intrinsic complementary information. Despite the promising performance of multi-modal methods, most existing approaches can only learn from samples with complete modalities, which discards a considerable amount of mono-modal data. Meanwhile, most existing data imputation approaches still rely on a large number of samples with complete modalities. In this study, we propose a modal-mixup data imputation method that randomly samples incomplete samples and synthesizes them into complete data for auxiliary training. Moreover, to mitigate the noise in the complementary information between unpaired modalities in the synthesized data, we introduce a bilateral network with deep supervision that improves and regularizes mono-modal representations with disease-specific information. Experiments on the ADNI dataset demonstrate the superiority of the proposed method for disease classification across different proportions of samples with complete modalities.
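
As a rough illustration of the two ideas in the abstract, the sketch below pairs a functional-only sample with a structural-only sample into a pseudo-complete sample (modal-mixup) and attaches per-branch classification heads to a two-branch network (deep supervision). This is a minimal, hypothetical PyTorch-style sketch: the function and module names, tensor shapes, label encoding, and the equal-weight label mixing are assumptions made for illustration, not the authors' implementation.

    import random
    import torch
    import torch.nn as nn

    def modal_mixup(mono_fc, mono_sc):
        """Pair a functional-only sample with a structural-only sample.

        mono_fc: dict with 'fc' (functional connectome) and 'label' (assumed one-hot)
        mono_sc: dict with 'sc' (structural connectome) and 'label' (assumed one-hot)
        Returns a synthesized "complete" sample; mixing the two labels with
        equal weight is an illustrative choice, not the paper's rule.
        """
        lam = 0.5
        return {
            "fc": mono_fc["fc"],
            "sc": mono_sc["sc"],
            "label": lam * mono_fc["label"] + (1.0 - lam) * mono_sc["label"],
        }

    def sample_pseudo_complete(fc_only_pool, sc_only_pool):
        """Randomly draw one sample from each mono-modal pool and mix them."""
        return modal_mixup(random.choice(fc_only_pool), random.choice(sc_only_pool))

    class BilateralNet(nn.Module):
        """Two modality branches with per-branch (deeply supervised) heads."""

        def __init__(self, n_rois=90, hidden=64, n_classes=2):
            super().__init__()
            self.fc_branch = nn.Sequential(nn.Linear(n_rois * n_rois, hidden), nn.ReLU())
            self.sc_branch = nn.Sequential(nn.Linear(n_rois * n_rois, hidden), nn.ReLU())
            # Deep-supervision heads on each mono-modal representation.
            self.fc_head = nn.Linear(hidden, n_classes)
            self.sc_head = nn.Linear(hidden, n_classes)
            # Fused head on the concatenated multi-modal representation.
            self.fused_head = nn.Linear(2 * hidden, n_classes)

        def forward(self, fc, sc):
            h_fc = self.fc_branch(fc.flatten(1))
            h_sc = self.sc_branch(sc.flatten(1))
            fused = self.fused_head(torch.cat([h_fc, h_sc], dim=1))
            return fused, self.fc_head(h_fc), self.sc_head(h_sc)

In training, one plausible objective is a cross-entropy term on the fused prediction plus auxiliary cross-entropy terms on the two mono-modal heads, so that each branch remains anchored to disease-specific labels even when its counterpart in a synthesized pair comes from a different subject.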

Cite this Paper


BibTeX
@InProceedings{pmlr-v227-yang24b,
  title     = {Incomplete learning of multi-modal connectome for brain disorder diagnosis via modal-mixup and deep supervision},
  author    = {Yang, Yanwu and Chen, Hairui and Chang, Zhikai and Xiang, Yang and Ye, Chenfei and Ma, Ting},
  booktitle = {Medical Imaging with Deep Learning},
  pages     = {1006--1018},
  year      = {2024},
  editor    = {Oguz, Ipek and Noble, Jack and Li, Xiaoxiao and Styner, Martin and Baumgartner, Christian and Rusu, Mirabela and Heimann, Tobias and Kontos, Despina and Landman, Bennett and Dawant, Benoit},
  volume    = {227},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--12 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v227/yang24b/yang24b.pdf},
  url       = {https://proceedings.mlr.press/v227/yang24b.html}
}
Endnote
%0 Conference Paper
%T Incomplete learning of multi-modal connectome for brain disorder diagnosis via modal-mixup and deep supervision
%A Yanwu Yang
%A Hairui Chen
%A Zhikai Chang
%A Yang Xiang
%A Chenfei Ye
%A Ting Ma
%B Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ipek Oguz
%E Jack Noble
%E Xiaoxiao Li
%E Martin Styner
%E Christian Baumgartner
%E Mirabela Rusu
%E Tobias Heimann
%E Despina Kontos
%E Bennett Landman
%E Benoit Dawant
%F pmlr-v227-yang24b
%I PMLR
%P 1006--1018
%U https://proceedings.mlr.press/v227/yang24b.html
%V 227
APA
Yang, Y., Chen, H., Chang, Z., Xiang, Y., Ye, C. & Ma, T. (2024). Incomplete learning of multi-modal connectome for brain disorder diagnosis via modal-mixup and deep supervision. Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 227:1006-1018. Available from https://proceedings.mlr.press/v227/yang24b.html.