Learning Robust Medical Image Segmentation with Inductive Bias

Shrajan Bhandary, Dejan Kuhn, Zahra Babaiee, Tobias Fechter, Anca-Ligia Grosu, Radu Grosu
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:3355-3373, 2026.

Abstract

Despite the success of transformer-based and convolutional neural networks in 3D medical image segmentation, current architectures exhibit limited generalisation on small datasets and under distribution shifts, especially when high-quality examples are scarce for specific structures. We introduce IB-nnU-Nets, a family of U-Net variants augmented with inductively biased filters inspired by vertebrate visual processing. Starting from a 3D U-Net backbone, we insert two 3D residual components into the second encoder block that implement on- and off-centre-surround convolutions with fixed, pre-computed weights and act as complementary edge detectors. Across multiple organ and tumour segmentation tasks, we show that equipping state-of-the-art 3D U-Nets with an IB block improves accuracy and robustness, with the strongest gains in small-data and out-of-distribution settings. The framework and trained IB-nnU-Net models are publicly available.
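The abstract does not specify the exact fixed filter weights, only that the on- and off-centre-surround convolutions are pre-computed and act as complementary edge detectors. As a minimal sketch, such kernels can be modelled with a 3D Difference-of-Gaussians (a standard model of centre-surround receptive fields), with the off kernel as the negation of the on kernel; the kernel size and sigma values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_3d(size, sigma):
    """Normalised isotropic 3D Gaussian kernel of shape (size, size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
    g = np.exp(-(xx**2 + yy**2 + zz**2) / (2.0 * sigma**2))
    return g / g.sum()

def centre_surround_kernel(size=5, sigma_centre=0.6, sigma_surround=1.2, on=True):
    """Fixed (non-learned) centre-surround kernel as a Difference-of-Gaussians.

    on=True:  excitatory centre, inhibitory surround (on-centre).
    on=False: the complementary off-centre kernel (sign-flipped).
    """
    dog = gaussian_3d(size, sigma_centre) - gaussian_3d(size, sigma_surround)
    return dog if on else -dog

on_k = centre_surround_kernel(on=True)
off_k = centre_surround_kernel(on=False)
```

Because both Gaussians are normalised to unit sum, each kernel sums to (numerically) zero, so it responds to local intensity contrast such as organ boundaries rather than to constant regions; in a residual block these weights would be held fixed rather than trained.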

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-bhandary26a,
  title     = {Learning Robust Medical Image Segmentation with Inductive Bias},
  author    = {Bhandary, Shrajan and Kuhn, Dejan and Babaiee, Zahra and Fechter, Tobias and Grosu, Anca{-}Ligia and Grosu, Radu},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {3355--3373},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/bhandary26a/bhandary26a.pdf},
  url       = {https://proceedings.mlr.press/v315/bhandary26a.html},
  abstract  = {Despite the success of transformer-based and convolutional neural networks in 3D medical image segmentation, current architectures exhibit limited generalisation on small datasets and under distribution shifts, especially when high-quality examples are scarce for specific structures. We introduce IB-nnU-Nets, a family of U-Net variants augmented with inductively biased filters inspired by vertebrate visual processing. Starting from a 3D U-Net backbone, we insert two 3D residual components into the second encoder block that implement on- and off-centre-surround convolutions with fixed, pre-computed weights and act as complementary edge detectors. Across multiple organ and tumour segmentation tasks, we show that equipping state-of-the-art 3D U-Nets with an IB block improves accuracy and robustness, with the strongest gains in small-data and out-of-distribution settings. The framework and trained IB-nnU-Net models are publicly available.}
}
Endnote
%0 Conference Paper
%T Learning Robust Medical Image Segmentation with Inductive Bias
%A Shrajan Bhandary
%A Dejan Kuhn
%A Zahra Babaiee
%A Tobias Fechter
%A Anca-Ligia Grosu
%A Radu Grosu
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-bhandary26a
%I PMLR
%P 3355--3373
%U https://proceedings.mlr.press/v315/bhandary26a.html
%V 315
%X Despite the success of transformer-based and convolutional neural networks in 3D medical image segmentation, current architectures exhibit limited generalisation on small datasets and under distribution shifts, especially when high-quality examples are scarce for specific structures. We introduce IB-nnU-Nets, a family of U-Net variants augmented with inductively biased filters inspired by vertebrate visual processing. Starting from a 3D U-Net backbone, we insert two 3D residual components into the second encoder block that implement on- and off-centre-surround convolutions with fixed, pre-computed weights and act as complementary edge detectors. Across multiple organ and tumour segmentation tasks, we show that equipping state-of-the-art 3D U-Nets with an IB block improves accuracy and robustness, with the strongest gains in small-data and out-of-distribution settings. The framework and trained IB-nnU-Net models are publicly available.
APA
Bhandary, S., Kuhn, D., Babaiee, Z., Fechter, T., Grosu, A.-L., & Grosu, R. (2026). Learning Robust Medical Image Segmentation with Inductive Bias. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:3355-3373. Available from https://proceedings.mlr.press/v315/bhandary26a.html.