The Expressive Power of Tuning Only the Normalization Layers

Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4130-4131, 2023.

Abstract

Feature normalization transforms such as Batch and Layer Normalization have become indispensable ingredients of state-of-the-art deep neural networks. Recent studies on fine-tuning large pretrained models indicate that tuning only the parameters of these affine transforms can achieve high accuracy on downstream tasks. These findings raise questions about the expressive power of tuning the normalization layers of otherwise frozen networks. In this work, we take a first step towards answering this question and show that for random ReLU networks, fine-tuning only their normalization layers can reconstruct any target network that is $O(\sqrt{\text{width}})$ times smaller. We show that this holds even for randomly sparsified networks, under sufficient overparameterization, in agreement with prior empirical work.
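
In practice, the setting studied here amounts to freezing all of a network's weights and optimizing only the per-feature scale and shift (gamma, beta) of its normalization layers. Below is a minimal PyTorch sketch of that setup for a plain fully connected ReLU network with LayerNorm; the class name, architecture sizes, and optimizer choice are illustrative assumptions, not the paper's construction.

# Minimal sketch (illustrative, not the paper's construction): freeze all weights
# of a random ReLU network and train only the normalization-layer affine parameters.
import torch
import torch.nn as nn

class FrozenReLUNet(nn.Module):
    def __init__(self, d_in=32, width=256, d_out=10, depth=3):
        super().__init__()
        layers = []
        for i in range(depth):
            layers += [
                nn.Linear(d_in if i == 0 else width, width),
                nn.LayerNorm(width),  # affine transform: per-feature gamma and beta
                nn.ReLU(),
            ]
        layers.append(nn.Linear(width, d_out))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = FrozenReLUNet()

# Freeze every parameter, then unfreeze only the normalization-layer affines.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, (nn.LayerNorm, nn.BatchNorm1d)):
        for p in m.parameters():
            p.requires_grad = True

# Only gamma/beta of the normalization layers are passed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)

With random frozen Linear weights, the trainable parameter count scales with depth times width, while the frozen weights scale with depth times width squared, which is why the reconstruction guarantee is stated relative to a target network that is roughly a square-root-of-width factor smaller.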

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-giannou23a,
  title     = {The Expressive Power of Tuning Only the Normalization Layers},
  author    = {Giannou, Angeliki and Rajput, Shashank and Papailiopoulos, Dimitris},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {4130--4131},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/giannou23a/giannou23a.pdf},
  url       = {https://proceedings.mlr.press/v195/giannou23a.html},
  abstract  = {Feature normalization transforms such as Batch and Layer-Normalization have become indispensable ingredients of state-of-the-art deep neural networks. Recent studies on fine-tuning large pretrained models indicate that just tuning the parameters of these affine transforms can achieve high accuracy for downstream tasks. These findings open the questions about the expressive power of tuning the normalization layers of frozen networks. In this work, we take the first step towards this question and show that for random ReLU networks, finetuning only its normalization layers can reconstruct any target network that is $O(\sqrt{\text{width}})$ times smaller. We show that this holds even for randomly sparsified networks, under sufficient overparameterization, in agreement with prior empirical work.}
}
Endnote
%0 Conference Paper
%T The Expressive Power of Tuning Only the Normalization Layers
%A Angeliki Giannou
%A Shashank Rajput
%A Dimitris Papailiopoulos
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-giannou23a
%I PMLR
%P 4130--4131
%U https://proceedings.mlr.press/v195/giannou23a.html
%V 195
%X Feature normalization transforms such as Batch and Layer-Normalization have become indispensable ingredients of state-of-the-art deep neural networks. Recent studies on fine-tuning large pretrained models indicate that just tuning the parameters of these affine transforms can achieve high accuracy for downstream tasks. These findings open the questions about the expressive power of tuning the normalization layers of frozen networks. In this work, we take the first step towards this question and show that for random ReLU networks, finetuning only its normalization layers can reconstruct any target network that is $O(\sqrt{\text{width}})$ times smaller. We show that this holds even for randomly sparsified networks, under sufficient overparameterization, in agreement with prior empirical work.
APA
Giannou, A., Rajput, S. & Papailiopoulos, D. (2023). The Expressive Power of Tuning Only the Normalization Layers. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:4130-4131. Available from https://proceedings.mlr.press/v195/giannou23a.html.