MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation

Wangkai Li, Zhaoyang Li, Rui Sun, Huayu Mai, Naisong Luo, Yuan Wang, Yuwen Pan, Guoxin Xiong, Huakai Lai, Zhiwei Xiong, Tianzhu Zhang
Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images, PMLR 212:1-12, 2023.

Abstract

Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary segmentation between foreground and background in a single domain, and fail to generalize to multi-modality cell images or to exploit the numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity UNet (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the instance-aware decoder endows pixel features with better cell-boundary discrimination, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images via a customized anti-ambiguity proxy for domain-invariant learning. Second, we apply consistency regularization to exploit unlabeled images, and introduce a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.
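The abstract does not give implementation details for the cell-wise distance field or the morphology-prior post-processing. As a rough illustration only, the two ideas might be sketched as below; the function names, the per-cell normalization, and the size threshold are our own assumptions, not the paper's formulation.

```python
import numpy as np
from scipy import ndimage


def cellwise_distance_field(instance_mask: np.ndarray) -> np.ndarray:
    """Euclidean distance transform computed per cell and normalized
    within each cell, so values peak at cell interiors and fall to
    zero at boundaries. In `instance_mask`, 0 = background and each
    positive integer is one cell instance."""
    field = np.zeros(instance_mask.shape, dtype=np.float64)
    for cell_id in np.unique(instance_mask):
        if cell_id == 0:
            continue
        cell = instance_mask == cell_id
        dist = ndimage.distance_transform_edt(cell)
        peak = dist.max()
        if peak > 0:
            field[cell] = dist[cell] / peak  # 1.0 at this cell's center
    return field


def drop_small_instances(binary_mask: np.ndarray, min_area: int = 20) -> np.ndarray:
    """Toy stand-in for a morphology prior in post-processing: label
    connected components and discard instances below `min_area` pixels."""
    labels, _ = ndimage.label(binary_mask)
    areas = np.bincount(labels.ravel())
    keep = areas >= min_area
    keep[0] = False  # background is never an instance
    return np.where(keep[labels], labels, 0)
```

Training a decoder to regress such a field (rather than a hard foreground mask) gives pixel features an explicit notion of distance to the nearest cell boundary, which is one plausible reading of the boundary-discrimination claim above.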
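The consistency regularization on unlabeled images is likewise only described at a high level. One common instantiation, shown here purely as an illustrative assumption and not as the paper's exact loss, penalizes disagreement between predictions on two augmented views of the same image:

```python
import numpy as np


def flip_consistency_loss(model, image: np.ndarray) -> float:
    """Consistency-regularization sketch for one unlabeled image:
    predict on the image and on a horizontally flipped copy, map the
    second prediction back to the original frame, and penalize their
    disagreement with mean-squared error. `model` maps an HxW image
    to an HxW probability map."""
    pred = model(image)
    pred_flip = model(image[:, ::-1])[:, ::-1]  # undo the flip
    return float(np.mean((pred - pred_flip) ** 2))
```

A flip-equivariant model incurs zero loss, so minimizing this term on unlabeled data pushes the segmenter toward predictions that are stable under the augmentation.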

Cite this Paper


BibTeX
@InProceedings{pmlr-v212-wangkai23a,
  title     = {MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation},
  author    = {Li, Wangkai and Li, Zhaoyang and Sun, Rui and Mai, Huayu and Luo, Naisong and Wang, Yuan and Pan, Yuwen and Xiong, Guoxin and Lai, Huakai and Xiong, Zhiwei and Zhang, Tianzhu},
  booktitle = {Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images},
  pages     = {1--12},
  year      = {2023},
  editor    = {Ma, Jun and Xie, Ronald and Gupta, Anubha and Guilherme de Almeida, José and Bader, Gary D. and Wang, Bo},
  volume    = {212},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Nov--09 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v212/wangkai23a/wangkai23a.pdf},
  url       = {https://proceedings.mlr.press/v212/wangkai23a.html},
  abstract  = {Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary segmentation between foreground and background in a single domain, and fail to generalize to multi-modality cell images or to exploit the numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity UNet (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the instance-aware decoder endows pixel features with better cell-boundary discrimination, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images via a customized anti-ambiguity proxy for domain-invariant learning. Second, we apply consistency regularization to exploit unlabeled images, and introduce a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.}
}
Endnote
%0 Conference Paper %T MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation %A Wangkai Li %A Zhaoyang Li %A Rui Sun %A Huayu Mai %A Naisong Luo %A Yuan Wang %A Yuwen Pan %A Guoxin Xiong %A Huakai Lai %A Zhiwei Xiong %A Tianzhu Zhang %B Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images %C Proceedings of Machine Learning Research %D 2023 %E Jun Ma %E Ronald Xie %E Anubha Gupta %E José Guilherme de Almeida %E Gary D. Bader %E Bo Wang %F pmlr-v212-wangkai23a %I PMLR %P 1--12 %U https://proceedings.mlr.press/v212/wangkai23a.html %V 212 %X Automatic cell segmentation enjoys great popularity with the development of deep learning. However, existing methods tend to focus on binary segmentation between foreground and background in a single domain, and fail to generalize to multi-modality cell images or to exploit the numerous valuable unlabeled data. To mitigate these limitations, we propose a Modality-aware Anti-ambiguity UNet (MAUNet), a unified deep model with an encoder-decoder structure for robust cell segmentation. The proposed MAUNet model enjoys several merits. First, the instance-aware decoder endows pixel features with better cell-boundary discrimination, benefiting from a cell-wise distance field, while the ambiguity-aware decoder alleviates the domain gap caused by multi-modality cell images via a customized anti-ambiguity proxy for domain-invariant learning. Second, we apply consistency regularization to exploit unlabeled images, and introduce a novel post-processing strategy that incorporates a morphology prior into cell instance segmentation. Experimental results on the official validation set demonstrate the effectiveness of our method.
APA
Li, W., Li, Z., Sun, R., Mai, H., Luo, N., Wang, Y., Pan, Y., Xiong, G., Lai, H., Xiong, Z. & Zhang, T. (2023). MAUNet: Modality-Aware Anti-Ambiguity U-Net for Multi-Modality Cell Segmentation. Proceedings of The Cell Segmentation Challenge in Multi-modality High-Resolution Microscopy Images, in Proceedings of Machine Learning Research 212:1-12. Available from https://proceedings.mlr.press/v212/wangkai23a.html.
