3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks

Richard Du, Varut Vardhanabhuti
Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR 121:174-192, 2020.

Abstract

Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning from datasets such as ImageNet has become important for increasing accuracy and reducing the number of training samples required. However, there is currently no comparably popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that carry information on the appearance of the scans, and use these labels to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large collection of cancer imaging datasets from The Cancer Imaging Archive (TCIA) and trained a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of transferring our proposed network to a liver segmentation task and found that it achieved superior segmentation performance (Dice = 90.0%) compared to training from scratch (Dice = 41.8%). Our proposed network shows promise as a backbone network for transfer learning to other tasks. Our approach, together with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
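As a rough illustration of the kind of metadata mining described in the abstract (this is a minimal sketch, not the authors' released code), the snippet below uses the pydicom library to read a few standard DICOM header fields that relate to scan appearance. The directory path and the exact label names are assumptions for demonstration only.

```python
# Minimal sketch: deriving appearance-related labels from DICOM headers.
# Assumptions: pydicom is installed, `series_dir` contains one DICOM series
# stored as *.dcm files, and the label keys below are illustrative only.
from pathlib import Path
import pydicom

def extract_series_labels(series_dir: str) -> dict:
    """Read header-only metadata from the first slice of a series."""
    first_slice = sorted(Path(series_dir).glob("*.dcm"))[0]
    # stop_before_pixels=True skips pixel data, so only the header is parsed.
    ds = pydicom.dcmread(first_slice, stop_before_pixels=True)

    return {
        "modality": ds.get("Modality", ""),            # e.g. "CT" or "MR"
        "sequence": ds.get("SeriesDescription", ""),   # free-text sequence name
        "orientation": ds.get("PatientPosition", ""),  # e.g. "HFS" (head-first supine)
        "contrast": bool(str(ds.get("ContrastBolusAgent", "")).strip()),
        "scan_target": ds.get("BodyPartExamined", ""),  # e.g. "LIVER", "CHEST"
        "slice_spacing": float(
            ds.get("SpacingBetweenSlices", ds.get("SliceThickness", 0.0)) or 0.0
        ),
    }

if __name__ == "__main__":
    labels = extract_series_labels("/path/to/dicom/series")  # hypothetical path
    print(labels)
```

In practice, such raw header values are noisy and free-text fields vary across institutions, which is why a rule-based mapping from metadata to a fixed label set (as the paper proposes) is needed before the labels can supervise network training.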

Cite this Paper


BibTeX
@InProceedings{pmlr-v121-du20a, title = {3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks}, author = {Du, Richard and Vardhanabhuti, Varut}, booktitle = {Proceedings of the Third Conference on Medical Imaging with Deep Learning}, pages = {174--192}, year = {2020}, editor = {Arbel, Tal and Ben Ayed, Ismail and de Bruijne, Marleen and Descoteaux, Maxime and Lombaert, Herve and Pal, Christopher}, volume = {121}, series = {Proceedings of Machine Learning Research}, month = {06--08 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v121/du20a/du20a.pdf}, url = {https://proceedings.mlr.press/v121/du20a.html}, abstract = {Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning from datasets such as ImageNet has become important for increasing accuracy and reducing the number of training samples required. However, there is currently no comparably popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that carry information on the appearance of the scans, and use these labels to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large collection of cancer imaging datasets from The Cancer Imaging Archive (TCIA) and trained a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of transferring our proposed network to a liver segmentation task and found that it achieved superior segmentation performance (Dice = 90.0\%) compared to training from scratch (Dice = 41.8\%). Our proposed network shows promise as a backbone network for transfer learning to other tasks. Our approach, together with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.} }
Endnote
%0 Conference Paper %T 3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks %A Richard Du %A Varut Vardhanabhuti %B Proceedings of the Third Conference on Medical Imaging with Deep Learning %C Proceedings of Machine Learning Research %D 2020 %E Tal Arbel %E Ismail Ben Ayed %E Marleen de Bruijne %E Maxime Descoteaux %E Herve Lombaert %E Christopher Pal %F pmlr-v121-du20a %I PMLR %P 174--192 %U https://proceedings.mlr.press/v121/du20a.html %V 121 %X Training deep convolutional neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning from datasets such as ImageNet has become important for increasing accuracy and reducing the number of training samples required. However, there is currently no comparably popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that carry information on the appearance of the scans, and use these labels to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method to extract labels from a large collection of cancer imaging datasets from The Cancer Imaging Archive (TCIA) and trained a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of transferring our proposed network to a liver segmentation task and found that it achieved superior segmentation performance (Dice = 90.0%) compared to training from scratch (Dice = 41.8%). Our proposed network shows promise as a backbone network for transfer learning to other tasks. Our approach, together with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
APA
Du, R. & Vardhanabhuti, V. (2020). 3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks. Proceedings of the Third Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 121:174-192. Available from https://proceedings.mlr.press/v121/du20a.html.
