The late-stage training dynamics of (stochastic) subgradient descent on homogeneous neural networks

Sholom Schechtman, Nicolas Schreuder
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:5143-5172, 2025.

Abstract

We analyze the implicit bias of constant-step stochastic subgradient descent (SGD). We consider the setting of binary classification with homogeneous neural networks – a large class of deep neural networks with ReLU-type activation functions, such as MLPs and CNNs without biases. We interpret the dynamics of the normalized SGD iterates as an Euler-like discretization of a conservative field flow that is naturally associated with the normalized classification margin. Owing to this interpretation, we show that the normalized SGD iterates converge to the set of critical points of the normalized margin in late-stage training (i.e., assuming that the data is correctly classified with a positive normalized margin). To the best of our knowledge, this is the first extension of the analysis of Lyu and Li (2020) on the discrete dynamics of gradient descent to the nonsmooth and stochastic setting. Our main result applies to binary classification with exponential or logistic losses. We additionally discuss extensions to more general settings.
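
To make the objects in the abstract concrete, here is a minimal sketch of the quantities involved, written in our own notation following the standard setup of Lyu and Li (2020); the paper's exact definitions may differ. For data $(x_i, y_i)_{i=1}^n$ with $y_i \in \{-1, +1\}$ and an $L$-homogeneous network $\Phi(\theta, \cdot)$, i.e. $\Phi(c\theta, x) = c^L \Phi(\theta, x)$ for all $c > 0$,
\[
\bar{\gamma}(\theta) \;=\; \frac{\min_{1 \le i \le n} y_i\, \Phi(\theta, x_i)}{\|\theta\|^{L}}
\qquad \text{(normalized margin)},
\]
\[
\theta_{k+1} \;=\; \theta_k - \alpha\, g_k, \qquad g_k \in \partial_\theta\, \ell\big(y_{i_k} \Phi(\theta_k, x_{i_k})\big)
\qquad \text{(constant-step SGD)},
\]
where $\ell$ is the exponential or logistic loss, $\alpha > 0$ is the constant step size, and $i_k$ is the example sampled at step $k$. The result concerns the normalized iterates $\theta_k / \|\theta_k\|$, which converge to the set of critical points of $\bar{\gamma}$ (in the appropriate nonsmooth sense) once every example is classified with a positive margin.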

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-schechtman25a,
  title     = {The late-stage training dynamics of (stochastic) subgradient descent on homogeneous neural networks},
  author    = {Schechtman, Sholom and Schreuder, Nicolas},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {5143--5172},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/schechtman25a/schechtman25a.pdf},
  url       = {https://proceedings.mlr.press/v291/schechtman25a.html}
}
APA
Schechtman, S. & Schreuder, N. (2025). The late-stage training dynamics of (stochastic) subgradient descent on homogeneous neural networks. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:5143-5172. Available from https://proceedings.mlr.press/v291/schechtman25a.html.
