Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning

Fahdi Kanavati, Masayuki Tsuneki
Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR 143:338-353, 2021.

Abstract

Transfer learning from ImageNet is the go-to approach when applying deep learning to medical images. The approach is either to fine-tune a pre-trained model or use it as a feature extractor. Most modern architectures contain batch normalisation layers, and fine-tuning a model with such layers requires a few precautions, as they consist of trainable and non-trainable weights and have two operating modes: training and inference. Attention is usually given to the non-trainable weights used during inference, as they are the main source of unexpected behaviour or degradation in performance during transfer learning. It is typically recommended to fine-tune the model with the batch normalisation layers kept in inference mode during both training and inference. In this paper, we pay closer attention instead to the trainable weights of the batch normalisation layers, and we explore their expressive influence in the context of transfer learning. We find that fine-tuning only the trainable weights (scale and centre) of the batch normalisation layers leads to performance similar to fine-tuning all of the weights, with the added benefit of faster convergence. We demonstrate this on seven publicly available medical imaging datasets, using four different model architectures.
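
The recipe described in the abstract is straightforward to express in training code. The following is a minimal PyTorch sketch, not the authors' implementation: it freezes all weights except the batch-norm scale (gamma) and centre (beta) parameters and a new task head, and keeps the batch-norm layers in inference mode so the frozen running statistics are used. The choice of ResNet-50, num_classes, and the learning rate are placeholder assumptions for illustration.

import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # hypothetical: class count of the target medical dataset

# Load an ImageNet-pretrained, BN-bearing architecture (ResNet-50 as one
# example; the paper evaluates four different architectures).
model = models.resnet50(weights="IMAGENET1K_V1")  # torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head

# Freeze every weight, then re-enable only the batch-norm scale/centre
# parameters and the freshly initialised head.
for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, nn.BatchNorm2d):
        module.weight.requires_grad = True   # gamma (scale)
        module.bias.requires_grad = True     # beta (centre)
for param in model.fc.parameters():
    param.requires_grad = True

# Keep BN layers in inference mode during training as well, so the frozen
# ImageNet running mean/variance are used instead of batch statistics.
def set_bn_to_inference_mode(model: nn.Module) -> None:
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.eval()

model.train()                    # training mode for the model overall...
set_bn_to_inference_mode(model)  # ...but BN stays in inference mode
# (re-apply after every model.train() call, e.g. at the start of each epoch)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

Only the parameters with requires_grad=True are passed to the optimiser, so each update touches the BN scale/centre weights and the head while the rest of the pre-trained network is left intact.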

Cite this Paper

BibTeX
@InProceedings{pmlr-v143-kanavati21a,
  title     = {Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning},
  author    = {Kanavati, Fahdi and Tsuneki, Masayuki},
  booktitle = {Proceedings of the Fourth Conference on Medical Imaging with Deep Learning},
  pages     = {338--353},
  year      = {2021},
  editor    = {Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schläfer, Alexander and Ernst, Floris},
  volume    = {143},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v143/kanavati21a/kanavati21a.pdf},
  url       = {https://proceedings.mlr.press/v143/kanavati21a.html}
}
Endnote
%0 Conference Paper
%T Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning
%A Fahdi Kanavati
%A Masayuki Tsuneki
%B Proceedings of the Fourth Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Mattias Heinrich
%E Qi Dou
%E Marleen de Bruijne
%E Jan Lellmann
%E Alexander Schläfer
%E Floris Ernst
%F pmlr-v143-kanavati21a
%I PMLR
%P 338--353
%U https://proceedings.mlr.press/v143/kanavati21a.html
%V 143
APA
Kanavati, F. & Tsuneki, M. (2021). Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning. Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 143:338-353. Available from https://proceedings.mlr.press/v143/kanavati21a.html.