ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities

Julie Mordacq, Leo Milecki, Maria Vakalopoulou, Steve Oudot, Vicky Kalogeiton
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:1040-1055, 2024.

Abstract

Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) aligning all modalities in the space of the strongest, richest modality (called anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer, leveraging both inter- and intra-modality correlations while handling missing modalities. We focus on detecting physiological changes in two real-life scenarios: stress in individuals induced by specific triggers and fighter pilots' loss of consciousness induced by g-forces. We validate the generalizability of ADAPT through extensive experiments on two datasets for these tasks, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications. Our code is available at https://github.com/jumdc/ADAPT.git.

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-mordacq24a,
  title = {ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities},
  author = {Mordacq, Julie and Milecki, Leo and Vakalopoulou, Maria and Oudot, Steve and Kalogeiton, Vicky},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages = {1040--1055},
  year = {2024},
  editor = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume = {250},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/mordacq24a/mordacq24a.pdf},
  url = {https://proceedings.mlr.press/v250/mordacq24a.html},
  abstract = {Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) aligning all modalities in the space of the strongest, richest modality (called anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer, leveraging both inter- and intra-modality correlations while handling missing modalities. We focus on detecting physiological changes in two real-life scenarios: stress in individuals induced by specific triggers and fighter pilots' loss of consciousness induced by g-forces. We validate the generalizability of ADAPT through extensive experiments on two datasets for these tasks, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications. Our code is available at https://github.com/jumdc/ADAPT.git.}
}
Endnote
%0 Conference Paper
%T ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities
%A Julie Mordacq
%A Leo Milecki
%A Maria Vakalopoulou
%A Steve Oudot
%A Vicky Kalogeiton
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-mordacq24a
%I PMLR
%P 1040--1055
%U https://proceedings.mlr.press/v250/mordacq24a.html
%V 250
%X Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) aligning all modalities in the space of the strongest, richest modality (called anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer, leveraging both inter- and intra-modality correlations while handling missing modalities. We focus on detecting physiological changes in two real-life scenarios: stress in individuals induced by specific triggers and fighter pilots' loss of consciousness induced by g-forces. We validate the generalizability of ADAPT through extensive experiments on two datasets for these tasks, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications. Our code is available at https://github.com/jumdc/ADAPT.git.
APA
Mordacq, J., Milecki, L., Vakalopoulou, M., Oudot, S. & Kalogeiton, V. (2024). ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:1040-1055. Available from https://proceedings.mlr.press/v250/mordacq24a.html.

Related Material