Multi-modal Graph Learning over UMLS Knowledge Graphs

Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
Proceedings of the 3rd Machine Learning for Health Symposium, PMLR 225:52-81, 2023.

Abstract

Clinicians are increasingly looking towards machine learning to gain insights about patient progression. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the Unified Medical Language System (UMLS). These concept representations are aggregated to represent a patient visit and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge. We provide our code on GitHub: https://github.com/ratschlab/mmugl.
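The pipeline the abstract describes (concept embeddings from a graph encoder, pooled into a visit representation, then summarized across visits by a sequence model) can be illustrated with a toy sketch. This is not the authors' code; all names (`concept_emb`, `visit_representation`, `encode_patient`), the mean-pooling aggregator, and the bare-bones recurrent update are illustrative assumptions standing in for the paper's GNN and sequence model.

```python
import numpy as np

# Hypothetical toy sketch, not the MMUGL implementation: concept embeddings
# (in the paper, produced by a GNN over a UMLS-derived knowledge graph) are
# mean-pooled into a visit representation, and the visit sequence is summarized
# by a minimal recurrent update before any prediction head.

rng = np.random.default_rng(0)
emb_dim = 8

# Stand-in for graph-encoder output: one embedding per medical concept ID.
concept_emb = {cid: rng.standard_normal(emb_dim) for cid in range(100)}

def visit_representation(concept_ids):
    """Aggregate the concepts recorded during one visit (mean pooling)."""
    return np.mean([concept_emb[c] for c in concept_ids], axis=0)

def encode_patient(visits, W_h, W_x):
    """Run a minimal recurrent update over the sequence of visits."""
    h = np.zeros(emb_dim)
    for concept_ids in visits:
        x = visit_representation(concept_ids)
        h = np.tanh(W_h @ h + W_x @ x)  # stand-in for the paper's sequence model
    return h

# Example: a patient with three hospital visits, each a set of concept IDs.
visits = [[1, 5, 9], [2, 5], [7, 8, 42]]
W_h = rng.standard_normal((emb_dim, emb_dim)) * 0.1
W_x = rng.standard_normal((emb_dim, emb_dim)) * 0.1
patient_state = encode_patient(visits, W_h, W_x)
print(patient_state.shape)  # (8,)
```

The resulting `patient_state` would feed a task-specific prediction head; the paper's contribution lies in how the concept embeddings themselves are learned from multi-modal UMLS knowledge graphs, which this sketch deliberately abstracts away.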

Cite this Paper


BibTeX
@InProceedings{pmlr-v225-burger23a,
  title = {Multi-modal Graph Learning over UMLS Knowledge Graphs},
  author = {Burger, Manuel and R\"atsch, Gunnar and Kuznetsova, Rita},
  booktitle = {Proceedings of the 3rd Machine Learning for Health Symposium},
  pages = {52--81},
  year = {2023},
  editor = {Hegselmann, Stefan and Parziale, Antonio and Shanmugam, Divya and Tang, Shengpu and Asiedu, Mercy Nyamewaa and Chang, Serina and Hartvigsen, Tom and Singh, Harvineet},
  volume = {225},
  series = {Proceedings of Machine Learning Research},
  month = {10 Dec},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v225/burger23a/burger23a.pdf},
  url = {https://proceedings.mlr.press/v225/burger23a.html},
  abstract = {Clinicians are increasingly looking towards machine learning to gain insights about patient progression. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the unified medical language system. These concept representations are aggregated to represent a patient visit and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge. We provide our code on GitHub https://github.com/ratschlab/mmugl .}
}
Endnote
%0 Conference Paper
%T Multi-modal Graph Learning over UMLS Knowledge Graphs
%A Manuel Burger
%A Gunnar Rätsch
%A Rita Kuznetsova
%B Proceedings of the 3rd Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2023
%E Stefan Hegselmann
%E Antonio Parziale
%E Divya Shanmugam
%E Shengpu Tang
%E Mercy Nyamewaa Asiedu
%E Serina Chang
%E Tom Hartvigsen
%E Harvineet Singh
%F pmlr-v225-burger23a
%I PMLR
%P 52--81
%U https://proceedings.mlr.press/v225/burger23a.html
%V 225
%X Clinicians are increasingly looking towards machine learning to gain insights about patient progression. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the unified medical language system. These concept representations are aggregated to represent a patient visit and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms these methods. The results demonstrate the significance of multi-modal medical concept representations based on prior medical knowledge. We provide our code on GitHub https://github.com/ratschlab/mmugl .
APA
Burger, M., Rätsch, G. & Kuznetsova, R. (2023). Multi-modal Graph Learning over UMLS Knowledge Graphs. Proceedings of the 3rd Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 225:52-81. Available from https://proceedings.mlr.press/v225/burger23a.html.

Related Material