Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks

Yuhang Cai, Kangjie Zhou, Jingfeng Wu, Song Mei, Michael Lindsey, Peter Bartlett
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:6426-6504, 2025.

Abstract

We establish the asymptotic implicit bias of gradient descent (GD) for generic non-homogeneous deep networks under exponential loss. Specifically, we characterize three key properties of GD iterates starting from a sufficiently small empirical risk, where the threshold is determined by a measure of the network’s non-homogeneity. First, we show that a normalized margin induced by the GD iterates increases nearly monotonically. Second, we prove that while the norm of the GD iterates diverges to infinity, the iterates themselves converge in direction. Finally, we establish that this directional limit satisfies the Karush–Kuhn–Tucker (KKT) conditions of a margin maximization problem. Prior works on implicit bias have focused exclusively on homogeneous networks; in contrast, our results apply to a broad class of non-homogeneous networks satisfying a mild near-homogeneity condition. In particular, our results apply to networks with residual connections and non-homogeneous activation functions, thereby resolving an open problem posed by Ji & Telgarsky (2020).
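For readers who want the optimization problem behind the abstract's KKT statement: this page does not spell it out, but in the implicit-bias literature for L-homogeneous networks (which the paper's near-homogeneity condition relaxes) the normalized margin and the associated margin-maximization problem are typically written as in the sketch below. The symbols f, θ, x_i, y_i, n, and L are generic placeholders, not notation taken from this page.

% Normalized margin of an L-homogeneous network f(\theta; x) on a dataset {(x_i, y_i)}_{i=1}^n:
\[
  \gamma(\theta) \;=\; \frac{\min_{1 \le i \le n} y_i\, f(\theta; x_i)}{\lVert \theta \rVert_2^{\,L}} .
\]
% The directional limit of GD is characterized through the KKT points of the
% constrained margin-maximization problem
\[
  \min_{\theta} \; \tfrac{1}{2}\lVert \theta \rVert_2^2
  \quad \text{subject to} \quad y_i\, f(\theta; x_i) \ge 1, \quad i = 1, \dots, n,
\]
% i.e., points \theta^\star admitting multipliers \lambda_1, \dots, \lambda_n \ge 0 such that
% \theta^\star = \sum_i \lambda_i\, y_i\, \nabla_\theta f(\theta^\star; x_i) and
% \lambda_i \big( y_i f(\theta^\star; x_i) - 1 \big) = 0 for every i (complementary slackness).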

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cai25m,
  title     = {Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks},
  author    = {Cai, Yuhang and Zhou, Kangjie and Wu, Jingfeng and Mei, Song and Lindsey, Michael and Bartlett, Peter},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {6426--6504},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cai25m/cai25m.pdf},
  url       = {https://proceedings.mlr.press/v267/cai25m.html},
  abstract  = {We establish the asymptotic implicit bias of gradient descent (GD) for generic non-homogeneous deep networks under exponential loss. Specifically, we characterize three key properties of GD iterates starting from a sufficiently small empirical risk, where the threshold is determined by a measure of the network’s non-homogeneity. First, we show that a normalized margin induced by the GD iterates increases nearly monotonically. Second, we prove that while the norm of the GD iterates diverges to infinity, the iterates themselves converge in direction. Finally, we establish that this directional limit satisfies the Karush–Kuhn–Tucker (KKT) conditions of a margin maximization problem. Prior works on implicit bias have focused exclusively on homogeneous networks; in contrast, our results apply to a broad class of non-homogeneous networks satisfying a mild near-homogeneity condition. In particular, our results apply to networks with residual connections and non-homogeneous activation functions, thereby resolving an open problem posed by Ji & Telgarsky (2020).}
}
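If you use BibTeX, the entry above can be saved to a bibliography file and cited by its key; a minimal, hypothetical LaTeX usage sketch follows (the file name refs.bib is illustrative, not prescribed by this page).

\documentclass{article}
% Assumes the BibTeX entry above has been saved verbatim into refs.bib.
\begin{document}
Gradient descent exhibits an implicit bias toward margin maximization
even for non-homogeneous networks \cite{pmlr-v267-cai25m}.
\bibliographystyle{plain}
\bibliography{refs}  % refs.bib is a hypothetical file name
\end{document}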
Endnote
%0 Conference Paper
%T Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks
%A Yuhang Cai
%A Kangjie Zhou
%A Jingfeng Wu
%A Song Mei
%A Michael Lindsey
%A Peter Bartlett
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cai25m
%I PMLR
%P 6426--6504
%U https://proceedings.mlr.press/v267/cai25m.html
%V 267
%X We establish the asymptotic implicit bias of gradient descent (GD) for generic non-homogeneous deep networks under exponential loss. Specifically, we characterize three key properties of GD iterates starting from a sufficiently small empirical risk, where the threshold is determined by a measure of the network’s non-homogeneity. First, we show that a normalized margin induced by the GD iterates increases nearly monotonically. Second, we prove that while the norm of the GD iterates diverges to infinity, the iterates themselves converge in direction. Finally, we establish that this directional limit satisfies the Karush–Kuhn–Tucker (KKT) conditions of a margin maximization problem. Prior works on implicit bias have focused exclusively on homogeneous networks; in contrast, our results apply to a broad class of non-homogeneous networks satisfying a mild near-homogeneity condition. In particular, our results apply to networks with residual connections and non-homogeneous activation functions, thereby resolving an open problem posed by Ji & Telgarsky (2020).
APA
Cai, Y., Zhou, K., Wu, J., Mei, S., Lindsey, M. & Bartlett, P. (2025). Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:6426-6504. Available from https://proceedings.mlr.press/v267/cai25m.html.