Transfer Learning by Adaptive Merging of Multiple Models

Robin Geyer, Luca Corinzia, Viktor Wegmayr
Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, PMLR 102:185-196, 2019.

Abstract

Transfer learning has been an important ingredient of state-of-the-art deep learning models. In particular, it has significant impact when little data is available for the target task, such as in many medical imaging applications. Typically, transfer learning means pre-training the target model on a related task which has sufficient data available. However, often pre-trained models from several related tasks are available, and it would be desirable to transfer their combined knowledge by automatic weighting and merging. For this reason, we propose T-IMM (Transfer Incremental Mode Matching), a method to leverage several pre-trained models, which extends the concept of Incremental Mode Matching from lifelong learning to the transfer learning setting. Our method introduces layer-wise mixing ratios, which are learned automatically and fuse multiple pre-trained models before fine-tuning on the new task. We demonstrate the efficacy of our method by the example of brain tumor segmentation in MRI (BRATS 2018 Challenge). We show that fusing the weights of two models trained on general brain parcellation according to our framework can greatly enhance the final model performance for small training sets, compared to standard transfer methods or state-of-the-art initialization. We further demonstrate that the benefit remains even when training on the entire BRATS 2018 dataset (255 patients).
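The core idea described in the abstract (fuse several pre-trained models with one learned mixing ratio per layer, then fine-tune the merged model) can be sketched as follows. This is a minimal illustration, not the authors' T-IMM implementation: the names `merge_models` and `logits` are assumptions, and the full method additionally involves mode matching of the pre-trained models, which this weighted-average sketch omits.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the per-layer mixing logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def merge_models(models, logits):
    """Fuse several pre-trained models layer by layer.

    models: list of dicts mapping layer name -> weight array
            (all models share the same architecture, hence the same shapes).
    logits: dict mapping layer name -> array of shape (n_models,),
            the learnable per-layer mixing logits (hypothetical parameterization).
    Returns one merged weight dict, used as the initialization for fine-tuning.
    """
    merged = {}
    for layer in models[0]:
        ratios = softmax(logits[layer])  # layer-wise mixing ratios, sum to 1
        merged[layer] = sum(r * m[layer] for r, m in zip(ratios, models))
    return merged

# Toy usage: two "pre-trained" models, one shared layer.
m1 = {"conv1": np.ones((3, 3))}
m2 = {"conv1": np.full((3, 3), 3.0)}
logits = {"conv1": np.zeros(2)}          # equal logits -> ratios (0.5, 0.5)
merged = merge_models([m1, m2], logits)  # every entry: 0.5*1 + 0.5*3 = 2.0
```

In practice the logits would be optimized on the target task together with (or before) fine-tuning, so that each layer learns how much to trust each source model.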

Cite this Paper


BibTeX
@InProceedings{pmlr-v102-geyer19a,
  title     = {Transfer Learning by Adaptive Merging of Multiple Models},
  author    = {Geyer, Robin and Corinzia, Luca and Wegmayr, Viktor},
  booktitle = {Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning},
  pages     = {185--196},
  year      = {2019},
  editor    = {Cardoso, M. Jorge and Feragen, Aasa and Glocker, Ben and Konukoglu, Ender and Oguz, Ipek and Unal, Gozde and Vercauteren, Tom},
  volume    = {102},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v102/geyer19a/geyer19a.pdf},
  url       = {https://proceedings.mlr.press/v102/geyer19a.html},
  abstract  = {Transfer learning has been an important ingredient of state-of-the-art deep learning models. In particular, it has significant impact when little data is available for the target task, such as in many medical imaging applications. Typically, transfer learning means pre-training the target model on a related task which has sufficient data available. However, often pre-trained models from several related tasks are available, and it would be desirable to transfer their combined knowledge by automatic weighting and merging. For this reason, we propose T-IMM (Transfer Incremental Mode Matching), a method to leverage several pre-trained models, which extends the concept of Incremental Mode Matching from lifelong learning to the transfer learning setting. Our method introduces layer-wise mixing ratios, which are learned automatically and fuse multiple pre-trained models before fine-tuning on the new task. We demonstrate the efficacy of our method by the example of brain tumor segmentation in MRI (BRATS 2018 Challenge). We show that fusing the weights of two models trained on general brain parcellation according to our framework can greatly enhance the final model performance for small training sets, compared to standard transfer methods or state-of-the-art initialization. We further demonstrate that the benefit remains even when training on the entire BRATS 2018 dataset (255 patients).}
}
Endnote
%0 Conference Paper %T Transfer Learning by Adaptive Merging of Multiple Models %A Robin Geyer %A Luca Corinzia %A Viktor Wegmayr %B Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning %C Proceedings of Machine Learning Research %D 2019 %E M. Jorge Cardoso %E Aasa Feragen %E Ben Glocker %E Ender Konukoglu %E Ipek Oguz %E Gozde Unal %E Tom Vercauteren %F pmlr-v102-geyer19a %I PMLR %P 185--196 %U https://proceedings.mlr.press/v102/geyer19a.html %V 102 %X Transfer learning has been an important ingredient of state-of-the-art deep learning models. In particular, it has significant impact when little data is available for the target task, such as in many medical imaging applications. Typically, transfer learning means pre-training the target model on a related task which has sufficient data available. However, often pre-trained models from several related tasks are available, and it would be desirable to transfer their combined knowledge by automatic weighting and merging. For this reason, we propose T-IMM (Transfer Incremental Mode Matching), a method to leverage several pre-trained models, which extends the concept of Incremental Mode Matching from lifelong learning to the transfer learning setting. Our method introduces layer-wise mixing ratios, which are learned automatically and fuse multiple pre-trained models before fine-tuning on the new task. We demonstrate the efficacy of our method by the example of brain tumor segmentation in MRI (BRATS 2018 Challenge). We show that fusing the weights of two models trained on general brain parcellation according to our framework can greatly enhance the final model performance for small training sets, compared to standard transfer methods or state-of-the-art initialization. We further demonstrate that the benefit remains even when training on the entire BRATS 2018 dataset (255 patients).
APA
Geyer, R., Corinzia, L. & Wegmayr, V. (2019). Transfer Learning by Adaptive Merging of Multiple Models. Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 102:185-196. Available from https://proceedings.mlr.press/v102/geyer19a.html.