MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation

Wangkai Li, Zhaoyang Li, Rui Sun, Huayu Mai, Naisong Luo, Yuan Wang, Yuwen Pan, Guoxin Xiong, Huakai Lai, Zhiwei Xiong, Tianzhu Zhang
Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images, PMLR 212:1-12, 2023.

Abstract

Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary foreground/background segmentation in a single domain, and fail to generalize to multi-modality cell images or to exploit numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity U-Net (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the proposed instance-aware decoder endows pixel features with better cell-boundary discrimination capabilities, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images thanks to a customized anti-ambiguity proxy for domain-invariant learning. Second, we employ consistency regularization to enable exploration of unlabeled images, together with a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.
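The cell-wise distance field that the instance-aware decoder regresses is a common device in cell instance segmentation: each instance is converted to a normalized distance-to-boundary map, so pixels near cell borders get low values and interiors high values, sharpening boundary discrimination between touching cells. The paper does not publish its exact construction here; the following is a minimal sketch of the standard per-instance version, assuming `scipy` is available (`cell_distance_field` is an illustrative name, not from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cell_distance_field(labels: np.ndarray) -> np.ndarray:
    """Per-cell normalized distance-to-boundary field.

    labels: integer instance map (0 = background, k > 0 = cell k).
    Returns a float map in [0, 1]: 0 on background, small near each
    cell's boundary, approaching 1 at the cell's interior center.
    Normalizing per cell keeps large and small cells on the same scale.
    """
    field = np.zeros(labels.shape, dtype=np.float64)
    for k in np.unique(labels):
        if k == 0:
            continue  # skip background
        mask = labels == k
        # Euclidean distance from each cell pixel to the nearest
        # non-cell pixel (i.e., to this cell's boundary).
        d = distance_transform_edt(mask)
        if d.max() > 0:
            field[mask] = d[mask] / d.max()  # normalize within the cell
    return field
```

Such a field can serve as a regression target alongside the binary mask; at inference, thresholding it high yields well-separated seeds for watershed-style instance recovery, which is one common realization of the morphology-aware post-processing the abstract alludes to.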

Cite this Paper


BibTeX
@InProceedings{pmlr-v212-wangkai23a,
  title     = {MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation},
  author    = {Li, Wangkai and Li, Zhaoyang and Sun, Rui and Mai, Huayu and Luo, Naisong and Wang, Yuan and Pan, Yuwen and Xiong, Guoxin and Lai, Huakai and Xiong, Zhiwei and Zhang, Tianzhu},
  booktitle = {Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images},
  pages     = {1--12},
  year      = {2023},
  editor    = {Ma, Jun and Xie, Ronald and Gupta, Anubha and Guilherme de Almeida, José and Bader, Gary D. and Wang, Bo},
  volume    = {212},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Nov--09 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v212/wangkai23a/wangkai23a.pdf},
  url       = {https://proceedings.mlr.press/v212/wangkai23a.html},
  abstract  = {Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary foreground/background segmentation in a single domain, and fail to generalize to multi-modality cell images or to exploit numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity U-Net (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the proposed instance-aware decoder endows pixel features with better cell-boundary discrimination capabilities, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images thanks to a customized anti-ambiguity proxy for domain-invariant learning. Second, we employ consistency regularization to enable exploration of unlabeled images, together with a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.}
}
Endnote
%0 Conference Paper
%T MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation
%A Wangkai Li
%A Zhaoyang Li
%A Rui Sun
%A Huayu Mai
%A Naisong Luo
%A Yuan Wang
%A Yuwen Pan
%A Guoxin Xiong
%A Huakai Lai
%A Zhiwei Xiong
%A Tianzhu Zhang
%B Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images
%C Proceedings of Machine Learning Research
%D 2023
%E Jun Ma
%E Ronald Xie
%E Anubha Gupta
%E José Guilherme de Almeida
%E Gary D. Bader
%E Bo Wang
%F pmlr-v212-wangkai23a
%I PMLR
%P 1--12
%U https://proceedings.mlr.press/v212/wangkai23a.html
%V 212
%X Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary foreground/background segmentation in a single domain, and fail to generalize to multi-modality cell images or to exploit numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity U-Net (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the proposed instance-aware decoder endows pixel features with better cell-boundary discrimination capabilities, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images thanks to a customized anti-ambiguity proxy for domain-invariant learning. Second, we employ consistency regularization to enable exploration of unlabeled images, together with a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.
APA
Li, W., Li, Z., Sun, R., Mai, H., Luo, N., Wang, Y., Pan, Y., Xiong, G., Lai, H., Xiong, Z. &amp; Zhang, T. (2023). MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation. Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images, in Proceedings of Machine Learning Research 212:1-12. Available from https://proceedings.mlr.press/v212/wangkai23a.html.
