On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent

Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake E Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:468-477, 2021.

Abstract

Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so-called “rich regimes”. However, the initialization structure is richer than the overall scale alone and involves the relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient-flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.
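
To make the phenomenon described in the abstract concrete, here is a minimal toy sketch (an illustration written for this page, not the paper's model or experiments): a two-layer "diagonal" linear network beta = u * v trained by plain gradient descent on an underdetermined least-squares problem. In every run the initial predictor is exactly zero (u(0) = gamma * ones, v(0) = 0); the parameter gamma simultaneously sets the weight scale and the imbalance u(0)^2 - v(0)^2 between the two layers, so varying it moves the trained model between the two regimes the abstract mentions. The dimensions, gamma values, and hyperparameters below are arbitrary choices made for illustration.

    # Toy sketch: kernel vs. rich regime in a two-layer diagonal linear network.
    # Not the paper's setup; an assumed minimal example in the spirit of this line of work.
    import numpy as np

    rng = np.random.default_rng(0)

    # Underdetermined regression: n samples, d features, k-sparse ground truth.
    n, d, k = 10, 20, 3
    X = rng.standard_normal((n, d))
    beta_star = np.zeros(d)
    beta_star[:k] = [1.5, -1.0, 2.0]
    y = X @ beta_star

    # Minimum-l2-norm interpolator: the reference solution for the kernel regime.
    beta_l2 = X.T @ np.linalg.solve(X @ X.T, y)

    def train_diag_net(gamma, lr, steps=100_000):
        # Gradient descent on L(u, v) = ||X (u*v) - y||^2 / (2n).
        # Init: u = gamma * ones, v = 0, so the predictor beta = u*v starts at
        # exactly zero for every gamma; gamma only controls the weight scale and
        # the imbalance between the two layers.
        u = gamma * np.ones(d)
        v = np.zeros(d)
        for _ in range(steps):
            r = X.T @ (X @ (u * v) - y) / n   # gradient of the loss w.r.t. beta
            u, v = u - lr * v * r, v - lr * u * r
        return u * v

    for gamma, lr in [(0.01, 1e-2), (10.0, 1e-3)]:
        beta = train_diag_net(gamma, lr)
        print(f"gamma={gamma:>5}:  train err={np.linalg.norm(X @ beta - y):.1e}  "
              f"||beta||_1={np.linalg.norm(beta, 1):.2f}  "
              f"coords with |beta_i|>0.1: {(np.abs(beta) > 0.1).sum()}  "
              f"||beta - beta_l2||={np.linalg.norm(beta - beta_l2):.3f}")

    print(f"for reference: ||beta*||_1={np.linalg.norm(beta_star, 1):.2f}, "
          f"||beta_l2||_1={np.linalg.norm(beta_l2, 1):.2f}")

Under these assumed settings, the nearly balanced run (small gamma) typically ends at a solution with small l1 norm concentrated on a few coordinates (rich-regime behavior), while the strongly unbalanced run (large gamma) lands essentially on the minimum-l2-norm interpolator (kernel-regime behavior), even though both start from the same zero predictor. The paper's contribution is to characterize such behavior analytically, deriving closed-form implicit regularizers that depend on the relative scales of the weights and layers rather than on a single overall scale.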

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-azulay21a,
  title     = {On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent},
  author    = {Azulay, Shahar and Moroshko, Edward and Nacson, Mor Shpigel and Woodworth, Blake E and Srebro, Nathan and Globerson, Amir and Soudry, Daniel},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {468--477},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/azulay21a/azulay21a.pdf},
  url       = {https://proceedings.mlr.press/v139/azulay21a.html},
  abstract  = {Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so called “rich regimes”. However, the initialization structure is richer than the overall scale alone and involves relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient-flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.}
}
Endnote
%0 Conference Paper
%T On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
%A Shahar Azulay
%A Edward Moroshko
%A Mor Shpigel Nacson
%A Blake E Woodworth
%A Nathan Srebro
%A Amir Globerson
%A Daniel Soudry
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-azulay21a
%I PMLR
%P 468--477
%U https://proceedings.mlr.press/v139/azulay21a.html
%V 139
%X Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel regime solution, whereas small initialization leads to so called “rich regimes”. However, the initialization structure is richer than the overall scale alone and involves relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient-flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.
APA
Azulay, S., Moroshko, E., Nacson, M. S., Woodworth, B. E., Srebro, N., Globerson, A. & Soudry, D. (2021). On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:468-477. Available from https://proceedings.mlr.press/v139/azulay21a.html.