CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images

Matias Cosarinsky, Nicolas Gaggion, Rodrigo Echeveste, Enzo Ferrante
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2192-2207, 2026.

Abstract

In this work, we study uncertainty estimation for anatomical landmark-based segmentation on chest X-rays. Inspired by hybrid neural network architectures that combine standard convolutional image encoders with graph-based generative decoders, and leveraging their variational latent space, we derive two complementary measures: (i) latent uncertainty, captured directly from the learned distribution parameters, and (ii) predictive uncertainty, obtained by generating multiple stochastic output predictions from latent samples. Through controlled corruption experiments, we show that both uncertainty measures increase with perturbation severity, reflecting both global and local degradation. We demonstrate that these uncertainty signals can identify unreliable predictions by comparing them against manual ground truth, and support out-of-distribution detection on the CheXmask dataset. More importantly, we release CheXmask-U, a large-scale dataset of 657,566 chest X-ray landmark segmentations with per-node uncertainty estimates, enabling researchers to account for spatial variations in segmentation quality when using these anatomical masks. Our findings establish uncertainty estimation as a promising direction to enhance the robustness and safe deployment of landmark-based anatomical segmentation methods in chest X-rays. A fully working interactive demo of the method is available at CheXmask-U-demo and the source code at CheXmask-U-code.
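The two measures described in the abstract can be sketched in a toy NumPy example. Everything here is hypothetical scaffolding, not the paper's implementation: the latent dimension, the 166-landmark count, and the linear `decode` stand-in for the graph-based generative decoder are made up purely to show the mechanics of (i) summarizing the variational parameters and (ii) estimating per-node dispersion from multiple decoded latent samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: an encoder would produce, per image, a latent
# mean and standard deviation for the variational latent space.
latent_dim, n_nodes = 8, 166                        # e.g. landmarks per X-ray
mu = rng.normal(size=latent_dim)                    # latent mean
sigma = np.abs(rng.normal(0.1, 0.02, size=latent_dim))  # latent std

# (i) Latent uncertainty: a scalar summary of the learned
# distribution parameters (here, the mean latent variance).
latent_uncertainty = float(np.mean(sigma ** 2))

# Toy linear "decoder" standing in for the graph-based generative decoder.
W = rng.normal(size=(latent_dim, n_nodes * 2))

def decode(z):
    """Map a latent vector to (n_nodes, 2) landmark coordinates."""
    return (z @ W).reshape(n_nodes, 2)

# (ii) Predictive uncertainty: decode many stochastic latent samples
# (reparameterization: z = mu + sigma * eps) and measure, per landmark,
# the dispersion of the predicted coordinates.
samples = np.stack([
    decode(mu + sigma * rng.normal(size=latent_dim))
    for _ in range(50)
])                                                  # (50, n_nodes, 2)
per_node_uncertainty = samples.std(axis=0).mean(axis=1)  # (n_nodes,)

print(latent_uncertainty, per_node_uncertainty.shape)
```

A per-node signal like `per_node_uncertainty` is what makes the released dataset useful for spatially weighting segmentation quality: downstream users can down-weight landmarks whose predicted positions vary strongly across latent samples.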

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-cosarinsky26a,
  title     = {CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images},
  author    = {Cosarinsky, Matias and Gaggion, Nicolas and Echeveste, Rodrigo and Ferrante, Enzo},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {2192--2207},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/cosarinsky26a/cosarinsky26a.pdf},
  url       = {https://proceedings.mlr.press/v315/cosarinsky26a.html},
  abstract  = {In this work, we study uncertainty estimation for anatomical landmark-based segmentation on chest X-rays. Inspired by hybrid neural network architectures that combine standard convolutional image encoders with graph-based generative decoders, and leveraging their variational latent space, we derive two complementary measures: (i) latent uncertainty, captured directly from the learned distribution parameters, and (ii) predictive uncertainty, obtained by generating multiple stochastic output predictions from latent samples. Through controlled corruption experiments, we show that both uncertainty measures increase with perturbation severity, reflecting both global and local degradation. We demonstrate that these uncertainty signals can identify unreliable predictions by comparing them against manual ground truth, and support out-of-distribution detection on the CheXmask dataset. More importantly, we release CheXmask-U, a large-scale dataset of 657,566 chest X-ray landmark segmentations with per-node uncertainty estimates, enabling researchers to account for spatial variations in segmentation quality when using these anatomical masks. Our findings establish uncertainty estimation as a promising direction to enhance the robustness and safe deployment of landmark-based anatomical segmentation methods in chest X-rays. A fully working interactive demo of the method is available at CheXmask-U-demo and the source code at CheXmask-U-code.}
}
Endnote
%0 Conference Paper
%T CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images
%A Matias Cosarinsky
%A Nicolas Gaggion
%A Rodrigo Echeveste
%A Enzo Ferrante
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-cosarinsky26a
%I PMLR
%P 2192--2207
%U https://proceedings.mlr.press/v315/cosarinsky26a.html
%V 315
%X In this work, we study uncertainty estimation for anatomical landmark-based segmentation on chest X-rays. Inspired by hybrid neural network architectures that combine standard convolutional image encoders with graph-based generative decoders, and leveraging their variational latent space, we derive two complementary measures: (i) latent uncertainty, captured directly from the learned distribution parameters, and (ii) predictive uncertainty, obtained by generating multiple stochastic output predictions from latent samples. Through controlled corruption experiments, we show that both uncertainty measures increase with perturbation severity, reflecting both global and local degradation. We demonstrate that these uncertainty signals can identify unreliable predictions by comparing them against manual ground truth, and support out-of-distribution detection on the CheXmask dataset. More importantly, we release CheXmask-U, a large-scale dataset of 657,566 chest X-ray landmark segmentations with per-node uncertainty estimates, enabling researchers to account for spatial variations in segmentation quality when using these anatomical masks. Our findings establish uncertainty estimation as a promising direction to enhance the robustness and safe deployment of landmark-based anatomical segmentation methods in chest X-rays. A fully working interactive demo of the method is available at CheXmask-U-demo and the source code at CheXmask-U-code.
APA
Cosarinsky, M., Gaggion, N., Echeveste, R., & Ferrante, E. (2026). CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:2192-2207. Available from https://proceedings.mlr.press/v315/cosarinsky26a.html.

Related Material