Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets

Depen Morwani, Harish G. Ramaswamy
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:827-880, 2022.

Abstract

We analyze the inductive bias of gradient descent for weight normalized smooth homogeneous neural nets, when trained on exponential or cross-entropy loss. We analyze both standard weight normalization (SWN) and exponential weight normalization (EWN), and show that the gradient flow path with EWN is equivalent to gradient flow on standard networks with an adaptive learning rate. We extend these results to gradient descent, and establish asymptotic relations between weights and gradients for both SWN and EWN. We also show that EWN causes weights to be updated in a way that prefers asymptotic relative sparsity. For EWN, we provide a finite-time convergence rate of the loss with gradient flow and a tight asymptotic convergence rate with gradient descent. We demonstrate our results for SWN and EWN on synthetic datasets. Experimental results on simple datasets support our claim on sparse EWN solutions, even with SGD, demonstrating its potential application in learning neural networks amenable to pruning.
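The two parameterizations compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's full per-layer construction: it assumes SWN parameterizes each weight vector as w = γ · v/‖v‖ with a directly trainable scale γ, while EWN parameterizes the scale through an exponent, w = exp(a) · v/‖v‖, so that gradient steps act multiplicatively on the effective scale.

```python
import numpy as np

def swn_weight(gamma, v):
    """Standard weight normalization (SWN): w = gamma * v / ||v||.
    The scale gamma is a directly trainable parameter."""
    return gamma * v / np.linalg.norm(v)

def ewn_weight(a, v):
    """Exponential weight normalization (EWN): w = exp(a) * v / ||v||.
    The scale is trained through its logarithm a, so gradient updates
    rescale the weight multiplicatively -- the mechanism the paper links
    to an asymptotic preference for relative sparsity."""
    return np.exp(a) * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])               # direction parameter, ||v|| = 5
print(swn_weight(2.0, v))              # scale 2 -> [1.2, 1.6]
print(ewn_weight(np.log(2.0), v))      # same effective weight vector
```

Both parameterizations represent the same set of functions; the paper's point is that gradient descent follows different paths through them, and hence reaches different solutions.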

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-morwani22a,
  title     = {Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets},
  author    = {Morwani, Depen and Ramaswamy, Harish G.},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {827--880},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/morwani22a/morwani22a.pdf},
  url       = {https://proceedings.mlr.press/v167/morwani22a.html},
  abstract  = {We analyze the inductive bias of gradient descent for weight normalized smooth homogeneous neural nets, when trained on exponential or cross-entropy loss. We analyse both standard weight normalization (SWN) and exponential weight normalization (EWN), and show that the gradient flow path with EWN is equivalent to gradient flow on standard networks with an adaptive learning rate. We extend these results to gradient descent, and establish asymptotic relations between weights and gradients for both SWN and EWN. We also show that EWN causes weights to be updated in a way that prefers asymptotic relative sparsity. For EWN, we provide a finite-time convergence rate of the loss with gradient flow and a tight asymptotic convergence rate with gradient descent. We demonstrate our results for SWN and EWN on synthetic data sets. Experimental results on simple datasets support our claim on sparse EWN solutions, even with SGD. This demonstrates its potential applications in learning neural networks amenable to pruning.}
}
Endnote
%0 Conference Paper
%T Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets
%A Depen Morwani
%A Harish G. Ramaswamy
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-morwani22a
%I PMLR
%P 827--880
%U https://proceedings.mlr.press/v167/morwani22a.html
%V 167
%X We analyze the inductive bias of gradient descent for weight normalized smooth homogeneous neural nets, when trained on exponential or cross-entropy loss. We analyse both standard weight normalization (SWN) and exponential weight normalization (EWN), and show that the gradient flow path with EWN is equivalent to gradient flow on standard networks with an adaptive learning rate. We extend these results to gradient descent, and establish asymptotic relations between weights and gradients for both SWN and EWN. We also show that EWN causes weights to be updated in a way that prefers asymptotic relative sparsity. For EWN, we provide a finite-time convergence rate of the loss with gradient flow and a tight asymptotic convergence rate with gradient descent. We demonstrate our results for SWN and EWN on synthetic data sets. Experimental results on simple datasets support our claim on sparse EWN solutions, even with SGD. This demonstrates its potential applications in learning neural networks amenable to pruning.
APA
Morwani, D. & Ramaswamy, H.G. (2022). Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:827-880. Available from https://proceedings.mlr.press/v167/morwani22a.html.