Atlas-aware ConvNet for Accurate yet Robust Anatomical Segmentation

Yuan Liang, Weinan Song, Jiawei Yang, Liang Qiu, Kun Wang, Lei He
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:113-128, 2020.

Abstract

Convolutional networks (ConvNets) have achieved promising accuracy for various anatomical segmentation tasks. Despite the success, these methods can be sensitive to appearance variations unforeseen in the training distribution. Considering the large variability of scans caused by artifacts, pathologies, and scanning setups, the robustness of ConvNets poses a major challenge for their clinical applications, yet has not been much explored. In this paper, we propose to mitigate the challenge by making ConvNets aware of the underlying anatomical invariances among imaging scans. Specifically, we introduce a fully convolutional Constraint Adoption Module (CAM) that incorporates probabilistic atlas priors as explicit constraints on predictions over a locally connected Conditional Random Field (CRF), which effectively reinforces the anatomical consistency of the labeling outputs. We design the CAM to be flexible enough to boost various ConvNets, and compact enough to be co-optimized with ConvNets so that the fusion parameters yield optimal performance. We show that the advantage of such atlas-prior fusion is two-fold on two brain parcellation tasks. First, our models achieve state-of-the-art accuracy among ConvNet-based methods on both datasets by significantly reducing structural abnormalities in predictions. Second, we can largely boost the robustness of existing ConvNets, as demonstrated by: (i) testing on scans with synthetic pathologies, and (ii) training and evaluating on scans from different scanning setups across datasets. Our method can be easily adopted by existing ConvNets: fine-tuning with CAM plugged in yields accuracy and robustness boosts.
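To illustrate the idea of fusing a probabilistic atlas prior with ConvNet predictions, the sketch below combines per-voxel class logits with the log of an atlas prior, analogous to adding a unary term in a CRF energy. This is a minimal hypothetical illustration, not the paper's CAM: the function name `fuse_atlas_prior`, the fixed scalar `weight` (which the paper instead co-optimizes with the ConvNet), and the absence of the locally connected pairwise CRF terms are all simplifying assumptions.

```python
import numpy as np

def fuse_atlas_prior(logits, atlas_prior, weight=1.0, eps=1e-8):
    """Hypothetical sketch: fuse ConvNet class logits with a
    probabilistic atlas prior by adding a scaled log-prior,
    similar in spirit to a unary potential in a CRF.

    logits:      (H, W, C) raw ConvNet scores per class
    atlas_prior: (H, W, C) per-voxel class probabilities from a
                 registered probabilistic atlas
    weight:      fusion strength (fixed here; learned in practice)
    """
    fused = logits + weight * np.log(atlas_prior + eps)
    # Softmax over the class axis to return valid probabilities.
    e = np.exp(fused - fused.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With a uniform prior the output reduces to the plain softmax of the logits; a confident prior pulls ambiguous voxels toward anatomically plausible labels, which is the intuition behind reinforcing anatomical consistency.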

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-liang20a,
  title     = {Atlas-aware ConvNet for Accurate yet Robust Anatomical Segmentation},
  author    = {Liang, Yuan and Song, Weinan and Yang, Jiawei and Qiu, Liang and Wang, Kun and He, Lei},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {113--128},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/liang20a/liang20a.pdf},
  url       = {https://proceedings.mlr.press/v129/liang20a.html},
  abstract  = {Convolutional networks (ConvNets) have achieved promising accuracy for various anatomical segmentation tasks. Despite the success, these methods can be sensitive to appearance variations unforeseen in the training distribution. Considering the large variability of scans caused by artifacts, pathologies, and scanning setups, the robustness of ConvNets poses a major challenge for their clinical applications, yet has not been much explored. In this paper, we propose to mitigate the challenge by making ConvNets aware of the underlying anatomical invariances among imaging scans. Specifically, we introduce a fully convolutional Constraint Adoption Module (CAM) that incorporates probabilistic atlas priors as explicit constraints on predictions over a locally connected Conditional Random Field (CRF), which effectively reinforces the anatomical consistency of the labeling outputs. We design the CAM to be flexible enough to boost various ConvNets, and compact enough to be co-optimized with ConvNets so that the fusion parameters yield optimal performance. We show that the advantage of such atlas-prior fusion is two-fold on two brain parcellation tasks. First, our models achieve state-of-the-art accuracy among ConvNet-based methods on both datasets by significantly reducing structural abnormalities in predictions. Second, we can largely boost the robustness of existing ConvNets, as demonstrated by: (i) testing on scans with synthetic pathologies, and (ii) training and evaluating on scans from different scanning setups across datasets. Our method can be easily adopted by existing ConvNets: fine-tuning with CAM plugged in yields accuracy and robustness boosts.}
}
Endnote
%0 Conference Paper
%T Atlas-aware ConvNet for Accurate yet Robust Anatomical Segmentation
%A Yuan Liang
%A Weinan Song
%A Jiawei Yang
%A Liang Qiu
%A Kun Wang
%A Lei He
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-liang20a
%I PMLR
%P 113--128
%U https://proceedings.mlr.press/v129/liang20a.html
%V 129
%X Convolutional networks (ConvNets) have achieved promising accuracy for various anatomical segmentation tasks. Despite the success, these methods can be sensitive to appearance variations unforeseen in the training distribution. Considering the large variability of scans caused by artifacts, pathologies, and scanning setups, the robustness of ConvNets poses a major challenge for their clinical applications, yet has not been much explored. In this paper, we propose to mitigate the challenge by making ConvNets aware of the underlying anatomical invariances among imaging scans. Specifically, we introduce a fully convolutional Constraint Adoption Module (CAM) that incorporates probabilistic atlas priors as explicit constraints on predictions over a locally connected Conditional Random Field (CRF), which effectively reinforces the anatomical consistency of the labeling outputs. We design the CAM to be flexible enough to boost various ConvNets, and compact enough to be co-optimized with ConvNets so that the fusion parameters yield optimal performance. We show that the advantage of such atlas-prior fusion is two-fold on two brain parcellation tasks. First, our models achieve state-of-the-art accuracy among ConvNet-based methods on both datasets by significantly reducing structural abnormalities in predictions. Second, we can largely boost the robustness of existing ConvNets, as demonstrated by: (i) testing on scans with synthetic pathologies, and (ii) training and evaluating on scans from different scanning setups across datasets. Our method can be easily adopted by existing ConvNets: fine-tuning with CAM plugged in yields accuracy and robustness boosts.
APA
Liang, Y., Song, W., Yang, J., Qiu, L., Wang, K. & He, L. (2020). Atlas-aware ConvNet for Accurate yet Robust Anatomical Segmentation. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:113-128. Available from https://proceedings.mlr.press/v129/liang20a.html.