Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays

Rogier Van der Sluijs, Nandita Bhaskhar, Daniel Rubin, Curtis Langlotz, Akshay S Chaudhari
Medical Imaging with Deep Learning, PMLR 227:444-467, 2024.

Abstract

Image augmentations are quintessential for effective visual representation learning across self-supervised learning techniques. While augmentation strategies for natural imaging have been studied extensively, medical images are vastly different from their natural counterparts. Thus, it is unknown whether common augmentation strategies employed in Siamese representation learning generalize to medical images and to what extent. To address this challenge, in this study, we systematically assess the effect of various augmentations on the quality and robustness of the learned representations. We train and evaluate Siamese Networks for abnormality detection on chest X-Rays across three large datasets (MIMIC-CXR, CheXpert and VinDr-CXR). We investigate the efficacy of the learned representations through experiments involving linear probing, fine-tuning, zero-shot transfer, and data efficiency. Finally, we identify a set of augmentations that yield robust representations that generalize well to both out-of-distribution data and diseases, while outperforming supervised baselines using just zero-shot transfer and linear probes by up to 20%.
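
The sketch below is a minimal illustration (not the authors' released code) of the kind of two-view augmentation wrapper used to train Siamese networks on chest X-rays. The specific transforms and parameters shown here are illustrative assumptions, not the augmentation set identified by the paper.

# A minimal sketch, assuming a PIL grayscale chest X-ray as input.
# Transform choices and parameters are illustrative, not the paper's final set.
from torchvision import transforms


class TwoViewAugment:
    """Return two independently augmented views of the same radiograph."""

    def __init__(self, image_size: int = 224):
        self.transform = transforms.Compose([
            transforms.RandomResizedCrop(image_size, scale=(0.5, 1.0)),
            transforms.RandomHorizontalFlip(p=0.5),
            # Photometric perturbations; hue/saturation are omitted because
            # chest X-rays are single-channel.
            transforms.RandomApply(
                [transforms.ColorJitter(brightness=0.4, contrast=0.4)], p=0.8),
            transforms.RandomApply(
                [transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
            transforms.ToTensor(),
        ])

    def __call__(self, image):
        # Two stochastic passes over the same image yield the positive pair
        # that a Siamese objective (e.g., SimSiam) pulls together.
        return self.transform(image), self.transform(image)

In practice, each training image would be passed through TwoViewAugment once per step, and the resulting pair of views fed to the two branches of the Siamese network.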

Cite this Paper

BibTeX
@InProceedings{pmlr-v227-sluijs24a,
  title     = {Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays},
  author    = {der Sluijs, Rogier Van and Bhaskhar, Nandita and Rubin, Daniel and Langlotz, Curtis and Chaudhari, Akshay S},
  booktitle = {Medical Imaging with Deep Learning},
  pages     = {444--467},
  year      = {2024},
  editor    = {Oguz, Ipek and Noble, Jack and Li, Xiaoxiao and Styner, Martin and Baumgartner, Christian and Rusu, Mirabela and Heinmann, Tobias and Kontos, Despina and Landman, Bennett and Dawant, Benoit},
  volume    = {227},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--12 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v227/sluijs24a/sluijs24a.pdf},
  url       = {https://proceedings.mlr.press/v227/sluijs24a.html},
  abstract  = {Image augmentations are quintessential for effective visual representation learning across self-supervised learning techniques. While augmentation strategies for natural imaging have been studied extensively, medical images are vastly different from their natural counterparts. Thus, it is unknown whether common augmentation strategies employed in Siamese representation learning generalize to medical images and to what extent. To address this challenge, in this study, we systematically assess the effect of various augmentations on the quality and robustness of the learned representations. We train and evaluate Siamese Networks for abnormality detection on chest X-Rays across three large datasets (MIMIC-CXR, CheXpert and VinDr-CXR). We investigate the efficacy of the learned representations through experiments involving linear probing, fine-tuning, zero-shot transfer, and data efficiency. Finally, we identify a set of augmentations that yield robust representations that generalize well to both out-of-distribution data and diseases, while outperforming supervised baselines using just zero-shot transfer and linear probes by up to 20%.}
}
Endnote
%0 Conference Paper
%T Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays
%A Rogier Van der Sluijs
%A Nandita Bhaskhar
%A Daniel Rubin
%A Curtis Langlotz
%A Akshay S Chaudhari
%B Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ipek Oguz
%E Jack Noble
%E Xiaoxiao Li
%E Martin Styner
%E Christian Baumgartner
%E Mirabela Rusu
%E Tobias Heinmann
%E Despina Kontos
%E Bennett Landman
%E Benoit Dawant
%F pmlr-v227-sluijs24a
%I PMLR
%P 444--467
%U https://proceedings.mlr.press/v227/sluijs24a.html
%V 227
%X Image augmentations are quintessential for effective visual representation learning across self-supervised learning techniques. While augmentation strategies for natural imaging have been studied extensively, medical images are vastly different from their natural counterparts. Thus, it is unknown whether common augmentation strategies employed in Siamese representation learning generalize to medical images and to what extent. To address this challenge, in this study, we systematically assess the effect of various augmentations on the quality and robustness of the learned representations. We train and evaluate Siamese Networks for abnormality detection on chest X-Rays across three large datasets (MIMIC-CXR, CheXpert and VinDr-CXR). We investigate the efficacy of the learned representations through experiments involving linear probing, fine-tuning, zero-shot transfer, and data efficiency. Finally, we identify a set of augmentations that yield robust representations that generalize well to both out-of-distribution data and diseases, while outperforming supervised baselines using just zero-shot transfer and linear probes by up to 20%.
APA
der Sluijs, R.V., Bhaskhar, N., Rubin, D., Langlotz, C. & Chaudhari, A.S. (2024). Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays. Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 227:444-467. Available from https://proceedings.mlr.press/v227/sluijs24a.html.
