Reducing Contextual Bias in Cardiac Magnetic Resonance Imaging Deep Learning Using Contrastive Self-Supervision

Makiya Nakashima, Donna Salem, HW Wilson Tang, Christopher Nguyen, Tae Hyun Hwang, Ding Zhao, Byung-Hak Kim, Deborah Kwon, David Chen
Proceedings of the 8th Machine Learning for Healthcare Conference, PMLR 219:473-488, 2023.

Abstract

Applying deep learning to medical imaging tasks is not straightforward due to the variable quality and relatively low volume of healthcare data. There is often considerable risk that deep learning models use contextual cues instead of physiologically relevant features to achieve the clinical task. Although these cues can provide shortcuts to high performance within a carefully crafted training set, they often lead to poor performance in real-world applications. Contrastive self-supervision (CSS) has recently been shown to boost the performance of deep learning on downstream applications in several medical imaging tasks. However, it is unclear how much these pre-trained representations are impacted by contextual cues, both known and unknown. In this work, we evaluate how CSS pre-training can produce not only more accurate but also more trustworthy and generalizable models for clinical imaging applications. Specifically, we evaluate the saliency and accuracy of deep learning models trained with CSS, in contrast to end-to-end supervised training and conventional transfer learning from natural image datasets, using institution-specific and public cardiomyopathy cohorts. We find that CSS pre-trained models not only improve downstream diagnostic performance in each cohort but, more importantly, also produce models with higher saliency in cardiac anatomy. Our code is available at https://github.com/makiya11/ssl_spur_cmr.
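For readers unfamiliar with contrastive self-supervision, the typical CSS objective pulls together embeddings of two augmented views of the same image while pushing apart embeddings of different images. Below is a minimal NumPy sketch of a SimCLR-style NT-Xent loss for illustration only; it is not the authors' implementation, and the function name and setup are assumptions:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of
    the same N images. Returns the mean loss over all 2N anchors.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / temperature                       # scaled cosine similarity
    n = z1.shape[0]
    # An anchor must never be compared against itself.
    np.fill_diagonal(sim, -np.inf)
    # The positive for anchor i is its other augmented view: i+N (or i-N).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # Cross-entropy of the positive against all other 2N-2 candidates.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is low when matched views embed near each other and far from everything else, which is what lets the pre-trained encoder learn view-invariant features before any diagnostic labels are introduced.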

Cite this Paper


BibTeX
@InProceedings{pmlr-v219-nakashima23a,
  title     = {Reducing Contextual Bias in Cardiac Magnetic Resonance Imaging Deep Learning Using Contrastive Self-Supervision},
  author    = {Nakashima, Makiya and Salem, Donna and Tang, HW Wilson and Nguyen, Christopher and Hwang, Tae Hyun and Zhao, Ding and Kim, Byung-Hak and Kwon, Deborah and Chen, David},
  booktitle = {Proceedings of the 8th Machine Learning for Healthcare Conference},
  pages     = {473--488},
  year      = {2023},
  editor    = {Deshpande, Kaivalya and Fiterau, Madalina and Joshi, Shalmali and Lipton, Zachary and Ranganath, Rajesh and Urteaga, Iñigo and Yeung, Serene},
  volume    = {219},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--12 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v219/nakashima23a/nakashima23a.pdf},
  url       = {https://proceedings.mlr.press/v219/nakashima23a.html},
  abstract  = {Applying deep learning to medical imaging tasks is not straightforward due to the variable quality and relatively low volume of healthcare data. There is often considerable risk that deep learning models use contextual cues instead of physiologically relevant features to achieve the clinical task. Although these cues can provide shortcuts to high performance within a carefully crafted training set, they often lead to poor performance in real-world applications. Contrastive self-supervision (CSS) has recently been shown to boost the performance of deep learning on downstream applications in several medical imaging tasks. However, it is unclear how much these pre-trained representations are impacted by contextual cues, both known and unknown. In this work, we evaluate how CSS pre-training can produce not only more accurate but also more trustworthy and generalizable models for clinical imaging applications. Specifically, we evaluate the saliency and accuracy of deep learning models trained with CSS, in contrast to end-to-end supervised training and conventional transfer learning from natural image datasets, using institution-specific and public cardiomyopathy cohorts. We find that CSS pre-trained models not only improve downstream diagnostic performance in each cohort but, more importantly, also produce models with higher saliency in cardiac anatomy. Our code is available at https://github.com/makiya11/ssl_spur_cmr.}
}
Endnote
%0 Conference Paper
%T Reducing Contextual Bias in Cardiac Magnetic Resonance Imaging Deep Learning Using Contrastive Self-Supervision
%A Makiya Nakashima
%A Donna Salem
%A HW Wilson Tang
%A Christopher Nguyen
%A Tae Hyun Hwang
%A Ding Zhao
%A Byung-Hak Kim
%A Deborah Kwon
%A David Chen
%B Proceedings of the 8th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Kaivalya Deshpande
%E Madalina Fiterau
%E Shalmali Joshi
%E Zachary Lipton
%E Rajesh Ranganath
%E Iñigo Urteaga
%E Serene Yeung
%F pmlr-v219-nakashima23a
%I PMLR
%P 473--488
%U https://proceedings.mlr.press/v219/nakashima23a.html
%V 219
%X Applying deep learning to medical imaging tasks is not straightforward due to the variable quality and relatively low volume of healthcare data. There is often considerable risk that deep learning models use contextual cues instead of physiologically relevant features to achieve the clinical task. Although these cues can provide shortcuts to high performance within a carefully crafted training set, they often lead to poor performance in real-world applications. Contrastive self-supervision (CSS) has recently been shown to boost the performance of deep learning on downstream applications in several medical imaging tasks. However, it is unclear how much these pre-trained representations are impacted by contextual cues, both known and unknown. In this work, we evaluate how CSS pre-training can produce not only more accurate but also more trustworthy and generalizable models for clinical imaging applications. Specifically, we evaluate the saliency and accuracy of deep learning models trained with CSS, in contrast to end-to-end supervised training and conventional transfer learning from natural image datasets, using institution-specific and public cardiomyopathy cohorts. We find that CSS pre-trained models not only improve downstream diagnostic performance in each cohort but, more importantly, also produce models with higher saliency in cardiac anatomy. Our code is available at https://github.com/makiya11/ssl_spur_cmr.
APA
Nakashima, M., Salem, D., Tang, H.W., Nguyen, C., Hwang, T.H., Zhao, D., Kim, B., Kwon, D. & Chen, D. (2023). Reducing Contextual Bias in Cardiac Magnetic Resonance Imaging Deep Learning Using Contrastive Self-Supervision. Proceedings of the 8th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 219:473-488. Available from https://proceedings.mlr.press/v219/nakashima23a.html.