MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar
Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR 143:728-744, 2021.

Abstract

Contrastive learning is a form of self-supervision that can leverage unlabeled data to produce pretrained models. While contrastive learning has demonstrated promising results on natural image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. In this work, we propose MoCo-CXR, which is an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays. In detecting pleural effusion, we find that linear models trained on MoCo-CXR-pretrained representations outperform those without MoCo-CXR-pretrained representations, indicating that MoCo-CXR-pretrained representations are of higher quality. End-to-end fine-tuning experiments reveal that a model initialized via MoCo-CXR-pretraining outperforms its non-MoCo-CXR-pretrained counterpart. We find that MoCo-CXR-pretraining provides the most benefit with limited labeled training data. Finally, we demonstrate similar results on a target Tuberculosis dataset unseen during pretraining, indicating that MoCo-CXR-pretraining endows models with representations and transferability that can be applied across chest X-ray datasets and tasks.
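
To make the two evaluation regimes in the abstract concrete, below is a minimal PyTorch sketch (not the authors' released code) contrasting a linear probe on frozen MoCo-CXR-pretrained features with end-to-end fine-tuning from the same initialization. The DenseNet-121 backbone, the checkpoint key prefix, the single-label pleural-effusion head, and the learning rates are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 1  # single pathology head, e.g. pleural effusion (illustrative assumption)


def build_model(moco_checkpoint=None):
    """DenseNet-121 classifier, optionally initialized from a MoCo-pretrained checkpoint."""
    model = models.densenet121(weights=None)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    if moco_checkpoint is not None:
        ckpt = torch.load(moco_checkpoint, map_location="cpu")
        state = ckpt.get("state_dict", ckpt)
        # Assumption: the query encoder is stored under a "module.encoder_q." prefix,
        # as in the reference MoCo implementation; strip it and ignore head mismatches.
        backbone = {k.replace("module.encoder_q.", ""): v
                    for k, v in state.items() if k.startswith("module.encoder_q.")}
        model.load_state_dict(backbone, strict=False)
    return model


# (1) Linear evaluation: freeze the pretrained backbone and train only the classifier,
#     which probes the quality of the learned representations.
linear_model = build_model(moco_checkpoint=None)  # pass a real checkpoint path in practice
for name, param in linear_model.named_parameters():
    param.requires_grad = name.startswith("classifier")
linear_opt = torch.optim.Adam(
    (p for p in linear_model.parameters() if p.requires_grad), lr=1e-3)

# (2) End-to-end fine-tuning: all parameters trainable, which measures the value of the
#     MoCo-CXR initialization, especially when labeled data is scarce.
finetune_model = build_model(moco_checkpoint=None)
finetune_opt = torch.optim.Adam(finetune_model.parameters(), lr=1e-4)

The linear-probe comparison isolates representation quality, while end-to-end fine-tuning measures the benefit of the initialization; the abstract reports that both favor MoCo-CXR, with the largest gains when labeled training data is limited.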

Cite this Paper


BibTeX
@InProceedings{pmlr-v143-sowrirajan21a,
  title     = {MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models},
  author    = {Sowrirajan, Hari and Yang, Jingbo and Ng, Andrew Y. and Rajpurkar, Pranav},
  booktitle = {Proceedings of the Fourth Conference on Medical Imaging with Deep Learning},
  pages     = {728--744},
  year      = {2021},
  editor    = {Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schläfer, Alexander and Ernst, Floris},
  volume    = {143},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v143/sowrirajan21a/sowrirajan21a.pdf},
  url       = {https://proceedings.mlr.press/v143/sowrirajan21a.html}
}
APA
Sowrirajan, H., Yang, J., Ng, A.Y. & Rajpurkar, P. (2021). MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models. Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 143:728-744. Available from https://proceedings.mlr.press/v143/sowrirajan21a.html.
