Multimodal Assessment of Pancreatic Cancer Resectability Using Deep Learning

Vincent Ochs, Christoph Kuemmerli, Florentin Bieder, Julia Wolleb, Joël L. Lavanchy, Julia Ruppel, Jan Liechti, Stephanie Taha-Mehlitz, Christian Andreas Nebiker, Beat Müller, Giuseppe Kito Fusai, Joerg-Matthias Pollok, Anas Taha, Philippe C. Cattin, Sebastian Staubli
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:88-106, 2026.

Abstract

Accurate determination of pancreatic ductal adenocarcinoma (PDAC) resectability relies on evaluating how the tumor interacts with major peripancreatic vessels on CT imaging, yet expert assessment often shows substantial variability. We introduce a fully automated multimodal deep learning framework that jointly analyzes 3D contrast-enhanced CT and structured clinical information to classify patients into the three National Comprehensive Cancer Network (NCCN) resectability categories (upfront resectable, borderline resectable, locally advanced). The approach uses a Swin-UNETR backbone to obtain anatomy-aware image representations through auxiliary segmentation of the pancreas, tumor, and vascular structures. These features are fused with a compact clinical embedding derived from 17 routinely collected variables and processed by a lightweight classification head. Model training is guided by a dynamic multitask objective that adapts the balance between segmentation and classification based on current tumor Dice performance, promoting feature representations that remain both anatomically informed and discriminative. In a cohort of 159 patients (85 upfront resectable, 47 borderline resectable, 27 locally advanced), the proposed method achieved an AUC of 0.86, a macro-F1 of 0.79, and an accuracy of 0.85 under stratified nested 5-fold cross-validation, outperforming adapted transformer-based and geometric baseline approaches. External validation on an independent cohort of 52 patients from Kantonsspital Aarau (KSA Aarau) yielded an AUC of 0.86, a macro-F1 of 0.81, and an accuracy of 0.87, supporting cross-institution generalization. Notably, the external KSA Aarau cohort contained complete clinical information for all variables used by the model and therefore required no imputation. The comparable performance observed on this dataset suggests that the KNN-based imputation applied to the training cohort did not introduce a detectable performance bias for the clinical variables considered. Because segmentation labels are required only during training, the final system enables mask-free inference while preserving vessel-aware interpretability. These findings demonstrate that integrating anatomical supervision with clinical context yields a robust and reproducible tool for supporting operability (i.e., NCCN-based resectability) assessment in pancreatic cancer. The implementation is publicly available at , and the data, as well as the weights, can be made available by the corresponding author upon reasonable request.
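
To make the described pipeline concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the toy Encoder3D stands in for the Swin-UNETR backbone, every layer width is a placeholder, and the Dice-driven weighting rule (lambda = 1 - tumor Dice) is only an assumed form of the dynamic multitask objective, since the abstract does not state the exact schedule.

```python
# Minimal sketch of the multimodal design described in the abstract.
# Assumptions: Encoder3D is a hypothetical stand-in for Swin-UNETR,
# all layer widths are placeholders, and the Dice-driven weighting
# below is an assumed form of the dynamic multitask objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder3D(nn.Module):
    """Toy 3D CNN encoder (hypothetical stand-in for Swin-UNETR)."""

    def __init__(self, in_ch=1, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # (B, 2*width, D/4, H/4, W/4)


class MultimodalResectabilityNet(nn.Module):
    """CT features + clinical embedding -> 3-class NCCN prediction,
    with an auxiliary segmentation head used only during training."""

    def __init__(self, n_clinical=17, n_seg=4, n_classes=3, width=32):
        super().__init__()
        self.encoder = Encoder3D(width=width)
        self.seg_head = nn.Conv3d(2 * width, n_seg, 1)  # pancreas/tumor/vessels/bg
        self.clinical = nn.Sequential(  # compact clinical embedding
            nn.Linear(n_clinical, 32), nn.ReLU(), nn.Linear(32, 32),
        )
        self.classifier = nn.Sequential(  # lightweight classification head
            nn.Linear(2 * width + 32, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, ct, clin):
        f = self.encoder(ct)
        seg_logits = self.seg_head(f)    # voxel-wise auxiliary output
        pooled = f.mean(dim=(2, 3, 4))   # global image descriptor
        fused = torch.cat([pooled, self.clinical(clin)], dim=1)
        return self.classifier(fused), seg_logits


def dynamic_multitask_loss(cls_logits, y, seg_logits, mask, tumor_dice):
    """Assumed schedule: emphasize segmentation while tumor Dice is low,
    shift weight toward classification as Dice improves."""
    lam = min(max(1.0 - float(tumor_dice), 0.0), 1.0)
    seg_loss = F.cross_entropy(seg_logits, mask)
    cls_loss = F.cross_entropy(cls_logits, y)
    return lam * seg_loss + (1.0 - lam) * cls_loss


# Toy usage with random tensors (shapes chosen for illustration only).
model = MultimodalResectabilityNet()
ct = torch.randn(2, 1, 32, 64, 64)          # two toy CT volumes
clin = torch.randn(2, 17)                   # 17 clinical variables
mask = torch.randint(0, 4, (2, 8, 16, 16))  # downsampled training labels
y = torch.randint(0, 3, (2,))               # NCCN class targets
cls_logits, seg_logits = model(ct, clin)
loss = dynamic_multitask_loss(cls_logits, y, seg_logits, mask, tumor_dice=0.4)
loss.backward()
```

At inference only the classification output is consumed, so no segmentation mask is needed, which is consistent with the mask-free inference described above.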
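
The abstract also mentions KNN-based imputation of missing clinical variables in the training cohort. Below is a minimal scikit-learn sketch under assumed settings; the neighbor count and the toy three-column table are illustrative, not values reported by the paper.

```python
# Hedged sketch of KNN-based imputation for a clinical feature table.
# The neighbor count and toy values are assumptions for illustration.
import numpy as np
from sklearn.impute import KNNImputer

X_train = np.array([
    [63.0, 1.2, np.nan],   # toy rows with scattered missing entries
    [71.0, np.nan, 35.0],
    [58.0, 0.9, 40.0],
])
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X_train)  # fit on the training cohort only
```

Because the external KSA Aarau cohort was complete, a fitted imputer would pass its rows through unchanged at test time.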

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-ochs26a,
  title = {Multimodal Assessment of Pancreatic Cancer Resectability Using Deep Learning},
  author = {Ochs, Vincent and Kuemmerli, Christoph and Bieder, Florentin and Wolleb, Julia and Lavanchy, Jo\"el L. and Ruppel, Julia and Liechti, Jan and Taha-Mehlitz, Stephanie and Nebiker, Christian Andreas and M\"uller, Beat and Fusai, Giuseppe Kito and Pollok, Joerg-Matthias and Taha, Anas and Cattin, Philippe C. and Staubli, Sebastian},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages = {88--106},
  year = {2026},
  editor = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume = {315},
  series = {Proceedings of Machine Learning Research},
  month = {08--10 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/ochs26a/ochs26a.pdf},
  url = {https://proceedings.mlr.press/v315/ochs26a.html},
  abstract = {Accurate determination of pancreatic ductal adenocarcinoma (PDAC) resectability relies on evaluating how the tumor interacts with major peripancreatic vessels on CT imaging, yet expert assessment often shows substantial variability. We introduce a fully automated multimodal deep learning framework that jointly analyzes 3D contrast enhanced CT and structured clinical information to classify patients into the three National Comprehensive Cancer Network (NCCN) resectability categories (upfront resectable, borderline resectable, locally advanced). The approach uses a Swin-UNETR backbone to obtain anatomy aware image representations through auxiliary segmentation of pancreas, tumor, and vascular structures. These features are fused with a compact clinical embedding derived from 17 routinely collected variables and processed by a lightweight classification head. Model training is guided by a dynamic multitask objective that adapts the balance between segmentation and classification based on current tumor Dice performance, promoting feature representations that remain both anatomically informed and discriminative. In a cohort of 159 patients (85 upfront resectable, 47 borderline resectable, 27 locally advanced), the proposed method achieved an AUC of 0.86, a macro-F1 of 0.79, and an accuracy of 0.85 using stratified nested 5-fold cross validation, outperforming adapted transformer based and geometric baseline approaches. External validation on an independent cohort with 52 patients from Kantonsspital Aarau (KSA Aarau) yielded an AUC of 0.86, a macro-F1 of 0.81, and an accuracy of 0.87, supporting cross-institution generalization. Notably, the external KSA Aarau cohort contained complete clinical information for all variables used by the model and therefore did not require imputation. The comparable performance observed on this dataset suggests that the KNN based imputation applied to the training cohort did not introduce a detectable performance bias for the clinical variables considered. Because segmentation labels are required only during training, the final system enables mask free inference while preserving vessel aware interpretability. These findings demonstrate that integrating anatomical supervision with clinical context yields a robust and reproducible tool for supporting operability (i.e., NCCN-based resectability) assessment in pancreatic cancer. The implementation is publicly available at , and the data, as well as the weights, can be made available by the corresponding author upon reasonable request.}
}
Endnote
%0 Conference Paper
%T Multimodal Assessment of Pancreatic Cancer Resectability Using Deep Learning
%A Vincent Ochs
%A Christoph Kuemmerli
%A Florentin Bieder
%A Julia Wolleb
%A Joël L. Lavanchy
%A Julia Ruppel
%A Jan Liechti
%A Stephanie Taha-Mehlitz
%A Christian Andreas Nebiker
%A Beat Müller
%A Giuseppe Kito Fusai
%A Joerg-Matthias Pollok
%A Anas Taha
%A Philippe C. Cattin
%A Sebastian Staubli
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-ochs26a
%I PMLR
%P 88--106
%U https://proceedings.mlr.press/v315/ochs26a.html
%V 315
%X Accurate determination of pancreatic ductal adenocarcinoma (PDAC) resectability relies on evaluating how the tumor interacts with major peripancreatic vessels on CT imaging, yet expert assessment often shows substantial variability. We introduce a fully automated multimodal deep learning framework that jointly analyzes 3D contrast enhanced CT and structured clinical information to classify patients into the three National Comprehensive Cancer Network (NCCN) resectability categories (upfront resectable, borderline resectable, locally advanced). The approach uses a Swin-UNETR backbone to obtain anatomy aware image representations through auxiliary segmentation of pancreas, tumor, and vascular structures. These features are fused with a compact clinical embedding derived from 17 routinely collected variables and processed by a lightweight classification head. Model training is guided by a dynamic multitask objective that adapts the balance between segmentation and classification based on current tumor Dice performance, promoting feature representations that remain both anatomically informed and discriminative. In a cohort of 159 patients (85 upfront resectable, 47 borderline resectable, 27 locally advanced), the proposed method achieved an AUC of 0.86, a macro-F1 of 0.79, and an accuracy of 0.85 using stratified nested 5-fold cross validation, outperforming adapted transformer based and geometric baseline approaches. External validation on an independent cohort with 52 patients from Kantonsspital Aarau (KSA Aarau) yielded an AUC of 0.86, a macro-F1 of 0.81, and an accuracy of 0.87, supporting cross-institution generalization. Notably, the external KSA Aarau cohort contained complete clinical information for all variables used by the model and therefore did not require imputation. The comparable performance observed on this dataset suggests that the KNN based imputation applied to the training cohort did not introduce a detectable performance bias for the clinical variables considered. Because segmentation labels are required only during training, the final system enables mask free inference while preserving vessel aware interpretability. These findings demonstrate that integrating anatomical supervision with clinical context yields a robust and reproducible tool for supporting operability (i.e., NCCN-based resectability) assessment in pancreatic cancer. The implementation is publicly available at , and the data, as well as the weights, can be made available by the corresponding author upon reasonable request.
APA
Ochs, V., Kuemmerli, C., Bieder, F., Wolleb, J., Lavanchy, J.L., Ruppel, J., Liechti, J., Taha-Mehlitz, S., Nebiker, C.A., Müller, B., Fusai, G.K., Pollok, J., Taha, A., Cattin, P.C. & Staubli, S. (2026). Multimodal Assessment of Pancreatic Cancer Resectability Using Deep Learning. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:88-106. Available from https://proceedings.mlr.press/v315/ochs26a.html.
