Deep learning-based retinal vessel segmentation with cross-modal evaluation

Luisa Sanchez Brea, Danilo Andrade De Jesus, Stefan Klein, Theo van Walsum
Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR 121:709-720, 2020.

Abstract

This work proposes a general pipeline for retinal vessel segmentation in en-face images. The main goal is to analyse whether a model trained on one of two modalities, Fundus Photography (FP) or Scanning Laser Ophthalmoscopy (SLO), transfers accurately to the other modality. This is motivated by the limited development and data available for en-face imaging modalities other than FP. FP and SLO images from four and two publicly available datasets, respectively, were used. First, current approaches were reviewed to define a basic pipeline for vessel segmentation. A state-of-the-art deep learning architecture (U-net) was used, and the effect of varying the patch size and the number of patches was studied by training, validating, and testing on each dataset individually. Next, the model was trained on either FP or SLO images, combining the available datasets for the given modality. Finally, the performance of each network was tested on the other modality. The models trained on each individual dataset showed performance comparable to the state of the art and to the inter-rater reliability. Overall, the best performance was observed for the largest patch size (256) and the maximum number of overlapping patches in each dataset, with a mean sensitivity, specificity, accuracy, and Dice score of 0.89±0.05, 0.95±0.02, 0.95±0.02, and 0.73±0.07, respectively. Models trained and tested on the same modality presented sensitivity, specificity, and accuracy equal to or higher than 0.9. Validation on a different modality showed significantly better sensitivity and Dice for the models trained on FP.
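
To make the reported setup concrete, the following is a minimal illustrative sketch, not the authors' code: it shows how overlapping 256×256 patches are typically extracted and how the four reported scores (sensitivity, specificity, accuracy, Dice) are computed for binary vessel masks. The function names, the stride of 128, the 0.5 threshold, and the image size used in the example are assumptions, not taken from the paper.

# Illustrative sketch only (not from the paper): overlapping patch extraction
# and the four reported metrics for binary vessel masks, using plain numpy.
import numpy as np


def extract_patches(image, patch_size=256, stride=128):
    """Slide a square window over the image; stride < patch_size gives overlap."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)


def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, accuracy, and Dice for binary vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / pred.size
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return sensitivity, specificity, accuracy, dice


if __name__ == "__main__":
    # Random data stands in for a vessel probability map (thresholded at 0.5)
    # and a manual annotation; the image size is an arbitrary placeholder.
    rng = np.random.default_rng(0)
    prob_map = rng.random((584, 565))
    truth = rng.random((584, 565)) > 0.9
    sens, spec, acc, dice = segmentation_metrics(prob_map > 0.5, truth)
    print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} dice={dice:.2f}")
    print("patches:", extract_patches(prob_map).shape)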

Cite this Paper


BibTeX
@InProceedings{pmlr-v121-sanchez-brea20a,
  title     = {Deep learning-based retinal vessel segmentation with cross-modal evaluation},
  author    = {Sanchez Brea, Luisa and De Jesus, Danilo Andrade and Klein, Stefan and Walsum, Theo van},
  booktitle = {Proceedings of the Third Conference on Medical Imaging with Deep Learning},
  pages     = {709--720},
  year      = {2020},
  editor    = {Arbel, Tal and Ben Ayed, Ismail and de Bruijne, Marleen and Descoteaux, Maxime and Lombaert, Herve and Pal, Christopher},
  volume    = {121},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v121/sanchez-brea20a/sanchez-brea20a.pdf},
  url       = {https://proceedings.mlr.press/v121/sanchez-brea20a.html}
}
Endnote
%0 Conference Paper
%T Deep learning-based retinal vessel segmentation with cross-modal evaluation
%A Luisa Sanchez Brea
%A Danilo Andrade De Jesus
%A Stefan Klein
%A Theo van Walsum
%B Proceedings of the Third Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Tal Arbel
%E Ismail Ben Ayed
%E Marleen de Bruijne
%E Maxime Descoteaux
%E Herve Lombaert
%E Christopher Pal
%F pmlr-v121-sanchez-brea20a
%I PMLR
%P 709--720
%U https://proceedings.mlr.press/v121/sanchez-brea20a.html
%V 121
APA
Sanchez Brea, L., De Jesus, D.A., Klein, S. & van Walsum, T. (2020). Deep learning-based retinal vessel segmentation with cross-modal evaluation. Proceedings of the Third Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 121:709-720. Available from https://proceedings.mlr.press/v121/sanchez-brea20a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v121/sanchez-brea20a/sanchez-brea20a.pdf