CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

Soham Uday Gadgil, Mark Endo, Emily Wen, Andrew Y. Ng, Pranav Rajpurkar
Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR 143:190-204, 2021.

Abstract

Medical image segmentation models are typically supervised by expert annotations at the pixel level, which can be expensive to acquire. In this work, we propose a method that combines the high quality of pixel-level expert annotations with the scale of coarse DNN-generated saliency maps for training multi-label semantic segmentation models. We demonstrate the application of our semi-supervised method, which we call CheXseg, on multi-label chest X-ray interpretation. We find that CheXseg improves upon the performance (mIoU) of fully-supervised methods that use only pixel-level expert annotations by 9.7% and weakly-supervised methods that use only DNN-generated saliency maps by 73.1%. Our best method is able to match radiologist agreement on three out of ten pathologies and reduces the overall performance gap by 57.2% as compared to weakly-supervised methods.
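To make the approach concrete, below is a minimal sketch of the idea the abstract describes: use pixel-level expert masks where they exist, fall back to thresholded DNN saliency maps as coarse pseudo-masks elsewhere, and evaluate with IoU (mIoU averages IoU over pathologies). All function names, the thresholding rule, and the 0.5 cutoff are illustrative assumptions, not the authors' released pipeline.

```python
import numpy as np

def build_training_masks(expert_masks, saliency_maps, threshold=0.5):
    """Combine scarce expert masks with abundant saliency-derived pseudo-masks.

    expert_masks:  {image_id: binary (H, W) array} for the small expert-annotated set.
    saliency_maps: {image_id: (H, W) array in [0, 1]} from a DNN classifier
                   (e.g. Grad-CAM-style maps), available at scale.
    threshold:     hypothetical cutoff for binarizing a saliency map.
    """
    masks = {}
    for image_id, saliency in saliency_maps.items():
        if image_id in expert_masks:
            # Use the high-quality pixel-level expert annotation when it exists.
            masks[image_id] = expert_masks[image_id].astype(np.uint8)
        else:
            # Otherwise fall back to a coarse pseudo-mask from the saliency map.
            masks[image_id] = (saliency >= threshold).astype(np.uint8)
    return masks

def iou(pred, target):
    """Intersection over union for one binary mask pair; mIoU averages
    this score across pathologies (and images)."""
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # Both masks empty: treat as a perfect match.
    return np.logical_and(pred, target).sum() / union

# Tiny usage example on random data.
rng = np.random.default_rng(0)
expert = {"xray_001": rng.integers(0, 2, size=(8, 8))}
saliency = {"xray_001": rng.random((8, 8)), "xray_002": rng.random((8, 8))}
train_masks = build_training_masks(expert, saliency)
print(iou(train_masks["xray_001"], expert["xray_001"]))  # 1.0 by construction
```

In this sketch the combined mask set would then supervise a standard multi-label segmentation network; the hard threshold is only one plausible way to turn a continuous saliency map into pseudo-labels.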

Cite this Paper

BibTeX
@InProceedings{pmlr-v143-gadgil21a,
  title     = {CheXseg: Combining Expert Annotations with {DNN}-generated Saliency Maps for X-ray Segmentation},
  author    = {Gadgil, Soham Uday and Endo, Mark and Wen, Emily and Ng, Andrew Y. and Rajpurkar, Pranav},
  booktitle = {Proceedings of the Fourth Conference on Medical Imaging with Deep Learning},
  pages     = {190--204},
  year      = {2021},
  editor    = {Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schläfer, Alexander and Ernst, Floris},
  volume    = {143},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v143/gadgil21a/gadgil21a.pdf},
  url       = {https://proceedings.mlr.press/v143/gadgil21a.html},
  abstract  = {Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire. In this work, we propose a method that combines the high quality of pixel-level expert annotations with the scale of coarse DNN-generated saliency maps for training multi-label semantic segmentation models. We demonstrate the application of our semi-supervised method, which we call CheXseg, on multi-label chest X-ray interpretation. We find that CheXseg improves upon the performance (mIoU) of fully-supervised methods that use only pixel-level expert annotations by 9.7% and weakly-supervised methods that use only DNN-generated saliency maps by 73.1%. Our best method is able to match radiologist agreement on three out of ten pathologies and reduces the overall performance gap by 57.2% as compared to weakly-supervised methods.}
}
Endnote
%0 Conference Paper
%T CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation
%A Soham Uday Gadgil
%A Mark Endo
%A Emily Wen
%A Andrew Y. Ng
%A Pranav Rajpurkar
%B Proceedings of the Fourth Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Mattias Heinrich
%E Qi Dou
%E Marleen de Bruijne
%E Jan Lellmann
%E Alexander Schläfer
%E Floris Ernst
%F pmlr-v143-gadgil21a
%I PMLR
%P 190--204
%U https://proceedings.mlr.press/v143/gadgil21a.html
%V 143
%X Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire. In this work, we propose a method that combines the high quality of pixel-level expert annotations with the scale of coarse DNN-generated saliency maps for training multi-label semantic segmentation models. We demonstrate the application of our semi-supervised method, which we call CheXseg, on multi-label chest X-ray interpretation. We find that CheXseg improves upon the performance (mIoU) of fully-supervised methods that use only pixel-level expert annotations by 9.7% and weakly-supervised methods that use only DNN-generated saliency maps by 73.1%. Our best method is able to match radiologist agreement on three out of ten pathologies and reduces the overall performance gap by 57.2% as compared to weakly-supervised methods.
APA
Gadgil, S.U., Endo, M., Wen, E., Ng, A.Y. & Rajpurkar, P. (2021). CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation. Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 143:190-204. Available from https://proceedings.mlr.press/v143/gadgil21a.html.