Task-Conditioned 3D U-Nets via Hypernetworks for Data-Scarce Medical Segmentation

Luca Hagen, Johanna P. Müller, Moritz Gmeiner, Bernhard Kainz
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:3542-3560, 2026.

Abstract

Training 3D segmentation models typically requires extensive expert annotation, which is costly and often unavailable for rare or low-prevalence pathologies. We propose a hypernetwork-based framework that amortises the prediction of parameters for compact 3D U-Nets, enabling task-specific specialisation from as little as a single annotated volume. By learning shared anatomical structure, such as coarse shape, scale, and spatial organisation, across organs and imaging modalities, the hypernetwork generates task-conditioned network parameters, allowing controlled adaptation to previously unseen but anatomically related targets without full retraining. We evaluate the proposed approach on the CT TotalSegmentator and Medical Segmentation Decathlon benchmarks. The method achieves strong one-shot performance for anatomically homogeneous structures (e.g., liver, spleen, atrium) and demonstrates stable few-shot adaptation for more heterogeneous or low-contrast targets (e.g., tumours, prostate). In regimes with two to four annotated volumes, hypernetwork-generated U-Nets consistently outperform pretrained baselines and substantially reduce the performance gap to fully supervised models while using minimal annotation. These results indicate that weight prediction serves as an effective task-informed prior for data-scarce 3D medical image segmentation.
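The core idea of the abstract, a hypernetwork that maps a task embedding to the parameters of a compact segmentation network, can be sketched in a few lines. This is a minimal illustrative example only: all sizes (embedding dimension, hidden width, a single 3×3×3 conv layer standing in for a full 3D U-Net) are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration; the paper's actual hypernetwork
# and 3D U-Net dimensions are not specified on this page.
EMB_DIM = 8                 # task-embedding size
HIDDEN = 32                 # hypernetwork hidden width
C_IN, C_OUT, K = 4, 4, 3    # one 3x3x3 conv layer of the target net
N_TARGET = C_OUT * C_IN * K * K * K   # number of predicted weights

# Hypernetwork: a small MLP mapping a task embedding to conv weights.
W1 = rng.normal(0.0, 0.1, (EMB_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_TARGET))

def predict_conv_weights(task_emb):
    """Generate one conv layer's weights, conditioned on the task."""
    h = np.tanh(task_emb @ W1)
    return (h @ W2).reshape(C_OUT, C_IN, K, K, K)

# Two different task embeddings (e.g. 'liver' vs 'spleen') yield two
# different parameterisations of the same compact network, without
# retraining the target network itself.
w_liver = predict_conv_weights(rng.normal(size=EMB_DIM))
w_spleen = predict_conv_weights(rng.normal(size=EMB_DIM))
print(w_liver.shape)                    # (4, 4, 3, 3, 3)
print(np.allclose(w_liver, w_spleen))   # False
```

Adapting to a new but related task then amounts to finding a suitable task embedding (e.g. from one annotated volume) rather than optimising the full set of network weights, which is what makes the one- and few-shot regimes in the abstract tractable.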

Cite this Paper

BibTeX
@InProceedings{pmlr-v315-hagen26a,
  title     = {Task-Conditioned 3D U-Nets via Hypernetworks for Data-Scarce Medical Segmentation},
  author    = {Hagen, Luca and M{\"u}ller, Johanna P. and Gmeiner, Moritz and Kainz, Bernhard},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {3542--3560},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/hagen26a/hagen26a.pdf},
  url       = {https://proceedings.mlr.press/v315/hagen26a.html},
  abstract  = {Training 3D segmentation models typically requires extensive expert annotation, which is costly and often unavailable for rare or low-prevalence pathologies. We propose a hypernetwork-based framework that amortises the prediction of parameters for compact 3D U-Nets, enabling task-specific specialisation from as little as a single annotated volume. By learning shared anatomical structure, such as coarse shape, scale, and spatial organisation, across organs and imaging modalities, the hypernetwork generates task-conditioned network parameters, allowing controlled adaptation to previously unseen but anatomically related targets without full retraining. We evaluate the proposed approach on the CT TotalSegmentator and Medical Segmentation Decathlon benchmarks. The method achieves strong one-shot performance for anatomically homogeneous structures (e.g., liver, spleen, atrium) and demonstrates stable few-shot adaptation for more heterogeneous or low-contrast targets (e.g., tumours, prostate). In regimes with two to four annotated volumes, hypernetwork-generated U-Nets consistently outperform pretrained baselines and substantially reduce the performance gap to fully supervised models while using minimal annotation. These results indicate that weight prediction serves as an effective task-informed prior for data-scarce 3D medical image segmentation.}
}
Endnote
%0 Conference Paper
%T Task-Conditioned 3D U-Nets via Hypernetworks for Data-Scarce Medical Segmentation
%A Luca Hagen
%A Johanna P. Müller
%A Moritz Gmeiner
%A Bernhard Kainz
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-hagen26a
%I PMLR
%P 3542--3560
%U https://proceedings.mlr.press/v315/hagen26a.html
%V 315
%X Training 3D segmentation models typically requires extensive expert annotation, which is costly and often unavailable for rare or low-prevalence pathologies. We propose a hypernetwork-based framework that amortises the prediction of parameters for compact 3D U-Nets, enabling task-specific specialisation from as little as a single annotated volume. By learning shared anatomical structure, such as coarse shape, scale, and spatial organisation, across organs and imaging modalities, the hypernetwork generates task-conditioned network parameters, allowing controlled adaptation to previously unseen but anatomically related targets without full retraining. We evaluate the proposed approach on the CT TotalSegmentator and Medical Segmentation Decathlon benchmarks. The method achieves strong one-shot performance for anatomically homogeneous structures (e.g., liver, spleen, atrium) and demonstrates stable few-shot adaptation for more heterogeneous or low-contrast targets (e.g., tumours, prostate). In regimes with two to four annotated volumes, hypernetwork-generated U-Nets consistently outperform pretrained baselines and substantially reduce the performance gap to fully supervised models while using minimal annotation. These results indicate that weight prediction serves as an effective task-informed prior for data-scarce 3D medical image segmentation.
APA
Hagen, L., Müller, J.P., Gmeiner, M. & Kainz, B. (2026). Task-Conditioned 3D U-Nets via Hypernetworks for Data-Scarce Medical Segmentation. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:3542-3560. Available from https://proceedings.mlr.press/v315/hagen26a.html.

Related Material