The Dynamics of Gradient Descent for Overparametrized Neural Networks
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:373-384, 2021.
Abstract
We consider the dynamics of gradient descent (GD) in overparameterized single hidden layer neural networks with a squared loss function. Recently, it has been shown that, under some conditions, the parameter values obtained using GD achieve zero training error and generalize well if the initial conditions are chosen appropriately. Here, through a Lyapunov analysis, we show that the dynamics of the neural network weights under GD converge to a point close to the minimum-norm solution that achieves zero training error for the linear approximation of the neural network.
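The claim can be checked numerically: run GD on a wide single hidden layer network from a random initialization, and compare the resulting weights to the minimum-norm parameter vector that interpolates the data under the model linearized at initialization. The sketch below is not the paper's code; the activation, width, step size, and variable names are illustrative assumptions, intended only to make the statement of the result concrete.

```python
# Minimal sketch (assumed setup, not the paper's code): GD on a wide
# single hidden layer network vs. the minimum-norm interpolant of the
# model linearized at initialization.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 10, 5, 1000                 # n samples, input dim d, m hidden units (overparameterized)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Network f(x) = (1/sqrt(m)) * a^T tanh(W x), with fixed output weights a
# and trainable hidden weights W (an assumed parameterization).
a = rng.choice([-1.0, 1.0], size=m)
W0 = rng.standard_normal((m, d))

def forward(W):
    return (np.tanh(X @ W.T) @ a) / np.sqrt(m)          # network outputs, shape (n,)

def grad(W):
    # Gradient of 0.5 * ||f(W) - y||^2 with respect to W.
    r = forward(W) - y                                    # residuals, shape (n,)
    S = 1.0 - np.tanh(X @ W.T) ** 2                       # tanh'(W x_i), shape (n, m)
    return ((r[:, None] * S) * a).T @ X / np.sqrt(m)      # shape (m, d)

# Jacobian of the outputs w.r.t. vec(W) at initialization, shape (n, m*d).
S0 = 1.0 - np.tanh(X @ W0.T) ** 2
J0 = np.zeros((n, m * d))
for i in range(n):
    J0[i] = np.outer(a * S0[i], X[i]).ravel() / np.sqrt(m)

# Minimum-norm parameter change making the *linearized* model interpolate:
# W_star = W0 + reshape( J0^T (J0 J0^T)^{-1} (y - f(W0)) ).
delta = J0.T @ np.linalg.solve(J0 @ J0.T, y - forward(W0))
W_star = W0 + delta.reshape(m, d)

# Plain gradient descent on the nonlinear network.
W = W0.copy()
lr = 0.2
for _ in range(20000):
    W -= lr * grad(W)

print("training loss after GD :", 0.5 * np.sum((forward(W) - y) ** 2))
print("||W_GD - W0||          :", np.linalg.norm(W - W0))
print("||W_GD - W_star||      :", np.linalg.norm(W - W_star))
```

Under this assumed setup, the last printed distance should be small relative to the overall movement of the weights, which is the flavor of the result stated in the abstract: the GD iterates end up near the minimum-norm zero-training-error solution of the linearized model.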