Reliable and Efficient Tissue Segmentation in Whole-Slide Images

Sander Elias Magnussen Helgesen, Anthony Manet, Karolina Cyll, Kari Anne Risan Tobin, Marna Lill Kjæreng, Ilyá Kostolomov, Audun Ljone Henriksen, Sepp de Raedt, Hanne Arenberg Askautrud, Miangela Lacle, Robert Jones, Cornelis Verhoef, Tarjei Sveinsgjerd Hveem, Ole-Johan Skrede, Andreas Kleppe
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 316:223-233, 2026.

Abstract

Whole-slide images in digital pathology often contain large regions of irrelevant background, making tissue segmentation an important preprocessing step in many applications. Traditional rule-based approaches to tissue segmentation often work quite well, but it is difficult to create general rules that cover all instances. We here apply an unmodified nnU-Net v2 training setup on downsampled whole-slide images to develop and test an efficient and robust tissue segmentation model. The dataset contained nearly 30 000 images from slides with different tissue types, imaged using different scanners, and annotated using a semiautomatic workflow so that all annotations have been verified or made by human experts. This large, diverse dataset enables the training of a tissue segmentation model that generalizes well across different scanners and tissue types. We observed that our proposed model achieves similar or better accuracy than other deep learning models, while offering better robustness than simpler rule-based methods. The best compromise between inference speed and accuracy was observed using images at 10 µm per pixel. Our approach can be used as an efficient and well-suited preprocessing step for computational pathology. Source code, Dockerfiles, and model weights are made publicly available at: https://github.com/icgi/Reliable-and-Efficient-Tissue-Segmentation-in-Whole-Slide-Images.
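The abstract's key preprocessing idea is to run segmentation not at native scanner resolution (often ~0.25 µm per pixel) but on images downsampled to about 10 µm per pixel. As a minimal illustration of that step, the sketch below computes the downsample factor and target image size from a slide's physical resolution; the function names are ours for illustration and are not taken from the paper's released code.

```python
def downsample_factor(native_mpp: float, target_mpp: float = 10.0) -> float:
    """Factor by which to shrink an image read at native_mpp (µm per pixel)
    so that it reaches target_mpp (µm per pixel)."""
    if native_mpp <= 0 or target_mpp <= 0:
        raise ValueError("resolutions must be positive")
    return target_mpp / native_mpp

def target_size(width: int, height: int,
                native_mpp: float, target_mpp: float = 10.0) -> tuple:
    """Pixel dimensions of the downsampled image, rounded to whole pixels."""
    f = downsample_factor(native_mpp, target_mpp)
    return (max(1, round(width / f)), max(1, round(height / f)))

# Example: a slide scanned at ~0.25 µm/px with 80,000 x 60,000 pixels
# shrinks by a factor of 40 to reach 10 µm/px.
print(target_size(80_000, 60_000, 0.25, 10.0))  # -> (2000, 1500)
```

In practice the native resolution would be read from slide metadata (e.g. the `mpp-x`/`mpp-y` properties exposed by WSI readers such as OpenSlide) before resizing and passing the small image to the segmentation model.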

Cite this Paper


BibTeX
@InProceedings{pmlr-v316-helgesen26a,
  title = {Reliable and Efficient Tissue Segmentation in Whole-Slide Images},
  author = {Helgesen, Sander Elias Magnussen and Manet, Anthony and Cyll, Karolina and Tobin, Kari Anne Risan and Kj{\ae}reng, Marna Lill and Kostolomov, Ily\'a and Henriksen, Audun Ljone and Raedt, Sepp de and Askautrud, Hanne Arenberg and Lacle, Miangela and Jones, Robert and Verhoef, Cornelis and Hveem, Tarjei Sveinsgjerd and Skrede, Ole-Johan and Kleppe, Andreas},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages = {223--233},
  year = {2026},
  editor = {Studer, Linda and Ciompi, Francesco and Khalili, Nadieh and Faryna, Khrystyna and Yeong, Joe and Lau, Mai Chan and Chen, Hao and Liu, Ziyi and Brattoli, Biagio},
  volume = {316},
  series = {Proceedings of Machine Learning Research},
  month = {27 Sep},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v316/main/assets/helgesen26a/helgesen26a.pdf},
  url = {https://proceedings.mlr.press/v316/helgesen26a.html},
  abstract = {Whole-slide images in digital pathology often contain large regions of irrelevant background, making tissue segmentation an important preprocessing step in many applications. Traditional rule-based approaches to tissue segmentation often work quite well, but it is difficult to create general rules that cover all instances. We here apply an unmodified nnU-Net v2 training setup on downsampled whole-slide images to develop and test an efficient and robust tissue segmentation model. The dataset contained nearly 30 000 images from slides with different tissue types, imaged using different scanners, and annotated using a semiautomatic workflow so that all annotations have been verified or made by human experts. This large, diverse dataset enables the training of a tissue segmentation model that generalizes well across different scanners and tissue types. We observed that our proposed model achieves similar or better accuracy than other deep learning models, while offering better robustness than simpler rule-based methods. The best compromise between inference speed and accuracy was observed using images at 10 $\mu$m per pixel. Our approach can be used as an efficient and well-suited preprocessing step for computational pathology. Source code, Dockerfiles, and model weights are made publicly available at: https://github.com/icgi/Reliable-and-Efficient-Tissue-Segmentation-in-Whole-Slide-Images.}
}
Endnote
%0 Conference Paper
%T Reliable and Efficient Tissue Segmentation in Whole-Slide Images
%A Sander Elias Magnussen Helgesen
%A Anthony Manet
%A Karolina Cyll
%A Kari Anne Risan Tobin
%A Marna Lill Kjæreng
%A Ilyá Kostolomov
%A Audun Ljone Henriksen
%A Sepp de Raedt
%A Hanne Arenberg Askautrud
%A Miangela Lacle
%A Robert Jones
%A Cornelis Verhoef
%A Tarjei Sveinsgjerd Hveem
%A Ole-Johan Skrede
%A Andreas Kleppe
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2026
%E Linda Studer
%E Francesco Ciompi
%E Nadieh Khalili
%E Khrystyna Faryna
%E Joe Yeong
%E Mai Chan Lau
%E Hao Chen
%E Ziyi Liu
%E Biagio Brattoli
%F pmlr-v316-helgesen26a
%I PMLR
%P 223--233
%U https://proceedings.mlr.press/v316/helgesen26a.html
%V 316
%X Whole-slide images in digital pathology often contain large regions of irrelevant background, making tissue segmentation an important preprocessing step in many applications. Traditional rule-based approaches to tissue segmentation often work quite well, but it is difficult to create general rules that cover all instances. We here apply an unmodified nnU-Net v2 training setup on downsampled whole-slide images to develop and test an efficient and robust tissue segmentation model. The dataset contained nearly 30 000 images from slides with different tissue types, imaged using different scanners, and annotated using a semiautomatic workflow so that all annotations have been verified or made by human experts. This large, diverse dataset enables the training of a tissue segmentation model that generalizes well across different scanners and tissue types. We observed that our proposed model achieves similar or better accuracy than other deep learning models, while offering better robustness than simpler rule-based methods. The best compromise between inference speed and accuracy was observed using images at 10 µm per pixel. Our approach can be used as an efficient and well-suited preprocessing step for computational pathology. Source code, Dockerfiles, and model weights are made publicly available at: https://github.com/icgi/Reliable-and-Efficient-Tissue-Segmentation-in-Whole-Slide-Images.
APA
Helgesen, S.E.M., Manet, A., Cyll, K., Tobin, K.A.R., Kjæreng, M.L., Kostolomov, I., Henriksen, A.L., de Raedt, S., Askautrud, H.A., Lacle, M., Jones, R., Verhoef, C., Hveem, T.S., Skrede, O.-J. & Kleppe, A. (2026). Reliable and Efficient Tissue Segmentation in Whole-Slide Images. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 316:223-233. Available from https://proceedings.mlr.press/v316/helgesen26a.html.