Multitask radiological modality invariant landmark localization using deep reinforcement learning

Vishwa S. Parekh, Alex E. Bocchieri, Vladimir Braverman, Michael A. Jacobs
Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR 121:588-600, 2020.

Abstract

Deep learning techniques are increasingly being developed for several applications in radiology, for example landmark and organ localization with segmentation. However, these applications to date have been limited in scope, in that they are restricted to a single task, e.g. localization of tumors or of a specific organ, using supervised training by an expert. As a result, a radiological decision support system would need to be equipped with potentially hundreds of deep learning models, with each model trained for a specific task or organ. This would be expensive in both storage and computation. In addition, the true potential of deep learning methods in radiology can only be achieved when the model is adaptable and generalizable to multiple different tasks. To that end, we have developed and implemented a multitask modality invariant deep reinforcement learning framework (MIDRL) for landmark localization and segmentation in radiological applications. MIDRL was evaluated using a diverse data set containing multiparametric MRIs (mpMRI) acquired from different organs and with different imaging parameters. A 2D single-agent model was trained to localize six different anatomical structures throughout the body, including the knee, trochanter, heart, kidney, breast nipple, and prostate, across T1-weighted, T2-weighted, Dynamic Contrast Enhanced (DCE), Diffusion Weighted Imaging (DWI), and DIXON MRI sequences obtained from twenty-four breast, eight prostate, and twenty-five whole-body mpMRIs. Additionally, a 3D multi-agent model was trained to localize the knee, trochanter, heart, and kidney in the whole-body mpMRIs. The trained MIDRL framework produced excellent accuracy in localizing each of the anatomical landmarks. In conclusion, we developed a multitask deep reinforcement learning framework and demonstrated MIDRL's potential towards the development of a general AI for a radiological decision support system.
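The landmark-localization idea the abstract describes can be pictured as a reinforcement-learning agent that moves through an image and is rewarded for stepping closer to the target landmark. Below is a minimal, hypothetical sketch of that idea using tabular Q-learning on a toy 2D grid; the grid size, landmark position, reward shaping, and all function names are illustrative assumptions, not the authors' MIDRL implementation (which uses deep, multi-agent RL on mpMRI volumes).

```python
import random

# Toy sketch of RL-based landmark localization: an agent walks a
# point around a small 2D "image" grid and is rewarded +1 for each
# step that brings it closer (Manhattan distance) to a fixed landmark,
# -1 for each step that moves it away.

GRID = 9                                       # grid side length (assumed)
LANDMARK = (6, 3)                              # hypothetical landmark position
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(pos, a):
    """Apply action a at pos; return (next_pos, reward)."""
    r, c = pos
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), GRID - 1)         # clamp to image bounds
    nc = min(max(c + dc, 0), GRID - 1)
    old = abs(r - LANDMARK[0]) + abs(c - LANDMARK[1])
    new = abs(nr - LANDMARK[0]) + abs(nc - LANDMARK[1])
    return (nr, nc), float(old - new)          # +1 closer, -1 farther, 0 clamped

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {}                                     # state -> list of 4 action values
    for _ in range(episodes):
        pos = (rng.randrange(GRID), rng.randrange(GRID))
        for _ in range(40):
            qs = Q.setdefault(pos, [0.0] * 4)
            a = rng.randrange(4) if rng.random() < eps \
                else max(range(4), key=qs.__getitem__)
            nxt, reward = step(pos, a)
            nq = Q.setdefault(nxt, [0.0] * 4)
            qs[a] += alpha * (reward + gamma * max(nq) - qs[a])
            pos = nxt
            if pos == LANDMARK:
                break
    return Q

def localize(Q, pos, max_steps=30):
    """Greedy rollout: follow the learned policy toward the landmark."""
    for _ in range(max_steps):
        if pos == LANDMARK:
            break
        a = max(range(4), key=Q.get(pos, [0.0] * 4).__getitem__)
        pos, _ = step(pos, a)
    return pos

Q = train()
print(localize(Q, (0, 0)))   # greedy agent walks to the landmark
```

In the paper's setting the tabular Q would be replaced by a deep Q-network over image patches, and a single shared agent (2D) or several cooperating agents (3D) would be trained across organs and MRI sequences.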

Cite this Paper


BibTeX
@InProceedings{pmlr-v121-parekh20a,
  title = {Multitask radiological modality invariant landmark localization using deep reinforcement learning},
  author = {Parekh, Vishwa S. and Bocchieri, Alex E. and Braverman, Vladimir and Jacobs, Michael A.},
  pages = {588--600},
  year = {2020},
  editor = {Tal Arbel and Ismail Ben Ayed and Marleen de Bruijne and Maxime Descoteaux and Herve Lombaert and Christopher Pal},
  volume = {121},
  series = {Proceedings of Machine Learning Research},
  address = {Montreal, QC, Canada},
  month = {06--08 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v121/parekh20a/parekh20a.pdf},
  url = {http://proceedings.mlr.press/v121/parekh20a.html},
  abstract = {Deep learning techniques are increasingly being developed for several applications in radiology, for example landmark and organ localization with segmentation. However, these applications to date have been limited in nature, in that, they are restricted to just a single task e.g. localization of tumors or to a specific organ using supervised training by an expert. As a result, to develop a radiological decision support system, it would need to be equipped with potentially hundreds of deep learning models with each model trained for a specific task or organ. This would be both space and computationally expensive. In addition, the true potential of deep learning methods in radiology can only be achieved when the model is adaptable and generalizable to multiple different tasks. To that end, we have developed and implemented a multitask modality invariant deep reinforcement learning framework (MIDRL) for landmark localization and segmentation in radiological applications. MIDRL was evaluated using a diverse data set containing multiparametric MRIs (mpMRI) acquired from different organs and with different imaging parameters. A 2D single agent model was trained to localize six different anatomical structures throughout the body, including, knee, trochanter, heart, kidney, breast nipple, and prostate across T1 weighted, T2 weighted, Dynamic Contrast Enhanced (DCE), Diffusion Weighted Imaging (DWI), and DIXON MRI sequences obtained from twenty-four breast, eight prostate, and twenty five whole body mpMRIs. Additionally, a 3D multi-agent model was trained to localize knee, trochanter, heart, and kidney in the whole body mpMRIs. The trained MIDRL framework produced excellent accuracy in localizing each of the anatomical landmarks. In conclusion, we developed a multitask deep reinforcement learning framework and demonstrated MIDRL's potential towards the development of a general AI for a radiological decision support system.}
}
Endnote
%0 Conference Paper
%T Multitask radiological modality invariant landmark localization using deep reinforcement learning
%A Vishwa S. Parekh
%A Alex E. Bocchieri
%A Vladimir Braverman
%A Michael A. Jacobs
%B Proceedings of the Third Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Tal Arbel
%E Ismail Ben Ayed
%E Marleen de Bruijne
%E Maxime Descoteaux
%E Herve Lombaert
%E Christopher Pal
%F pmlr-v121-parekh20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 588--600
%U http://proceedings.mlr.press
%V 121
%W PMLR
%X Deep learning techniques are increasingly being developed for several applications in radiology, for example landmark and organ localization with segmentation. However, these applications to date have been limited in nature, in that, they are restricted to just a single task e.g. localization of tumors or to a specific organ using supervised training by an expert. As a result, to develop a radiological decision support system, it would need to be equipped with potentially hundreds of deep learning models with each model trained for a specific task or organ. This would be both space and computationally expensive. In addition, the true potential of deep learning methods in radiology can only be achieved when the model is adaptable and generalizable to multiple different tasks. To that end, we have developed and implemented a multitask modality invariant deep reinforcement learning framework (MIDRL) for landmark localization and segmentation in radiological applications. MIDRL was evaluated using a diverse data set containing multiparametric MRIs (mpMRI) acquired from different organs and with different imaging parameters. A 2D single agent model was trained to localize six different anatomical structures throughout the body, including, knee, trochanter, heart, kidney, breast nipple, and prostate across T1 weighted, T2 weighted, Dynamic Contrast Enhanced (DCE), Diffusion Weighted Imaging (DWI), and DIXON MRI sequences obtained from twenty-four breast, eight prostate, and twenty five whole body mpMRIs. Additionally, a 3D multi-agent model was trained to localize knee, trochanter, heart, and kidney in the whole body mpMRIs. The trained MIDRL framework produced excellent accuracy in localizing each of the anatomical landmarks. In conclusion, we developed a multitask deep reinforcement learning framework and demonstrated MIDRL's potential towards the development of a general AI for a radiological decision support system.
APA
Parekh, V.S., Bocchieri, A.E., Braverman, V. & Jacobs, M.A. (2020). Multitask radiological modality invariant landmark localization using deep reinforcement learning. Proceedings of the Third Conference on Medical Imaging with Deep Learning, in PMLR 121:588-600

Related Material