Suspicious Alignment of SGD: A Fine-Grained Step Size Condition Analysis
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-66, 2026.
Abstract
This paper studies the suspicious alignment phenomenon in stochastic gradient descent (SGD) under ill-conditioned optimization, where the Hessian spectrum splits into a dominant and a bulk subspace. The phenomenon concerns how the stochastic gradient aligns with the dominant subspace over the course of training: during the initial phase of SGD, the alignment tends to decrease; it then enters a rising phase and eventually stabilizes at high alignment. The alignment is considered “suspicious” because, paradoxically, projecting the gradient update onto this highly aligned dominant subspace proves ineffective at reducing the loss. This work gives a fine-grained analysis, in a high-dimensional quadratic setup, of how step size selection produces this phenomenon. Our primary contributions can be summarized as follows. We propose a step-size condition theory revealing that in low-alignment regimes an adaptive critical step size $\eta_t^\ast$ separates alignment-decreasing ($\eta_t < \eta_t^\ast$) from alignment-increasing ($\eta_t > \eta_t^\ast$) dynamics, whereas in high-alignment regimes the alignment is self-correcting and decreases regardless of the step size. We further show that under sufficient ill-conditioning there exists a step size interval in which the loss in the bulk subspace decreases while the loss in the dominant subspace increases, which explains a recent empirical observation that projecting gradient updates onto the dominant subspace is ineffective. Finally, based on this adaptive step-size theory, we prove that with a constant step size and large initialization, SGD exhibits the distinct two-phase behavior: an initial alignment-decreasing phase, followed by stabilization at high alignment.
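As a rough illustration of the setting the abstract describes, the following toy simulation (a sketch under assumed parameters, not the paper's construction or its theoretical conditions) runs constant-step-size SGD with additive gradient noise on a diagonal quadratic whose spectrum splits into a few large "dominant" eigenvalues and many small "bulk" ones, and records the fraction of the stochastic gradient's squared norm lying in the dominant subspace. All dimensions, eigenvalues, the noise scale, and the step size are illustrative choices.

```python
import numpy as np

# Toy sketch: SGD on an ill-conditioned quadratic f(x) = 0.5 * x^T H x
# (H diagonal) with additive gradient noise, tracking alignment of the
# stochastic gradient with the dominant Hessian subspace.
# All numbers below are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

d, k = 100, 5                                   # ambient / dominant dimensions
eigs = np.concatenate([np.full(k, 50.0),        # dominant eigenvalues (large)
                       np.full(d - k, 0.5)])    # bulk eigenvalues (small)

def alignment(g, k):
    """Share of the squared gradient norm in the dominant subspace (first k coords)."""
    return float(np.sum(g[:k] ** 2) / np.sum(g ** 2))

x = 10.0 * rng.standard_normal(d)               # "large initialization"
eta, sigma, T = 0.03, 1.0, 300                  # constant step size (eta * 50 < 2), noise, steps

aligns = []
for _ in range(T):
    g = eigs * x + sigma * rng.standard_normal(d)   # stochastic gradient of f
    aligns.append(alignment(g, k))
    x = x - eta * g                                  # SGD update

print(f"alignment: start={aligns[0]:.3f}, min={min(aligns):.3f}, "
      f"late avg={np.mean(aligns[-50:]):.3f}")
```

Plotting or printing the `aligns` trajectory is one simple way to probe, in this assumed setup, whether an early alignment dip followed by stabilization at high alignment appears for a given step size.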