Information Transfer Across Clinical Tasks via Adaptive Parameter Optimisation

Anshul Thakur, Elena Gal, Soheila Molaei, Xiao Gu, Patrick Schwab, Danielle Belgrave, Kim Branson, David A. Clifton
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3367-3375, 2025.

Abstract

This paper presents Adaptive Parameter Optimisation (APO), a novel framework for optimising shared models across multiple clinical tasks that addresses the challenge of balancing strict parameter sharing, which often leads to task conflicts, against soft parameter sharing, which may limit effective cross-task information exchange. The proposed APO framework leverages the lazy behaviour observed in over-parameterised neural networks, where only a small subset of parameters undergoes substantial updates during training. APO dynamically identifies and updates task-specific parameters while treating parameters claimed by other tasks as protected, limiting their modification to prevent interference; the remaining unclaimed parameters stay unchanged, embodying the lazy-training phenomenon. This dynamic management of task-specific, protected, and unclaimed parameters across tasks enables effective information sharing, preserves task-specific adaptability, and mitigates gradient conflicts without enforcing a uniform representation. Experimental results across diverse healthcare datasets demonstrate that APO surpasses traditional information-sharing approaches, such as multi-task learning and model-agnostic meta-learning, in improving task performance.
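The partition into task-specific, protected, and unclaimed parameters lends itself to a simple gradient-masking picture. The sketch below is a minimal, illustrative rendering of that idea in PyTorch, not the authors' implementation: the function name apo_step, the ownership map, and the claim_frac and protect_scale hyperparameters are all assumptions, and the top-k gradient-magnitude rule for claiming parameters stands in for whatever selection criterion the paper actually defines.

import torch

def apo_step(model, loss, task_id, ownership, lr=1e-3,
             claim_frac=0.05, protect_scale=0.1):
    """One hypothetical APO-style update for `task_id` on a shared model.

    ownership: dict mapping parameter name -> LongTensor of the same
               shape, holding the id of the task that claimed each
               entry (-1 = unclaimed).
    """
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad
            owner = ownership[name]
            # Claim the top `claim_frac` fraction of still-unclaimed
            # entries with the largest gradient magnitude for this task
            # (an assumed stand-in for the paper's assignment rule).
            free = owner.eq(-1)
            if free.any():
                k = max(1, int(claim_frac * free.sum().item()))
                mag = torch.where(free, g.abs(), torch.full_like(g, -1.0))
                idx = mag.flatten().topk(k).indices
                owner.view(-1)[idx] = task_id
            # Full update on this task's own parameters, a damped update
            # on parameters protected by other tasks, and no update on
            # unclaimed (lazy) parameters.
            mine = owner.eq(task_id).float()
            others = (owner.ge(0) & owner.ne(task_id)).float()
            p -= lr * g * (mine + protect_scale * others)

# Usage (hypothetical): keep one persistent ownership map, initialised
# to -1, and interleave per-task update steps.
# ownership = {n: torch.full_like(p, -1, dtype=torch.long)
#              for n, p in model.named_parameters()}
# for task_id, batch in task_sampler:
#     loss = task_loss(model, batch, task_id)
#     apo_step(model, loss, task_id, ownership)

Under this reading, each task gradually accumulates its own claimed subset while the bulk of the over-parameterised network stays lazy, so cross-task interference is confined to the damped updates on protected entries.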

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-thakur25b,
  title     = {Information Transfer Across Clinical Tasks via Adaptive Parameter Optimisation},
  author    = {Thakur, Anshul and Gal, Elena and Molaei, Soheila and Gu, Xiao and Schwab, Patrick and Belgrave, Danielle and Branson, Kim and Clifton, David A.},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3367--3375},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/thakur25b/thakur25b.pdf},
  url       = {https://proceedings.mlr.press/v258/thakur25b.html}
}
Endnote
%0 Conference Paper
%T Information Transfer Across Clinical Tasks via Adaptive Parameter Optimisation
%A Anshul Thakur
%A Elena Gal
%A Soheila Molaei
%A Xiao Gu
%A Patrick Schwab
%A Danielle Belgrave
%A Kim Branson
%A David A. Clifton
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-thakur25b
%I PMLR
%P 3367--3375
%U https://proceedings.mlr.press/v258/thakur25b.html
%V 258
APA
Thakur, A., Gal, E., Molaei, S., Gu, X., Schwab, P., Belgrave, D., Branson, K. & Clifton, D.A. (2025). Information Transfer Across Clinical Tasks via Adaptive Parameter Optimisation. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3367-3375. Available from https://proceedings.mlr.press/v258/thakur25b.html.
