Phase Transition of Regret for Logistic Regression with Large Weights
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-28, 2026.
Abstract
In online learning, a learner receives data over rounds $1 \le t \le T$ and, at each round, predicts a label that is then compared to the true label, incurring a loss. The excess of the learner's total loss over $T$ rounds above that of the best expert in a reference class is called the regret. We study the *fixed-design* minimax regret, in which the feature sequence is given in advance and the regret is evaluated for the best predictor against the worst-case label sequence. This paper focuses on the *logarithmic loss* over a class of experts $\mathcal{H}_{\mathbf{w}}$ parameterized by a $d$-dimensional weight vector $\mathbf{w}$, which may be unbounded and may grow with $T$. For weights bounded in norm by $R$, it is known that the minimax regret grows no faster than $(d/2)\log(TR^2/d)$; hence, when $R$ is allowed to grow with $T$, the effective coefficient in front of $\log T$ is no longer controlled. In this paper, however, we demonstrate a phase transition: for $R \ge T$ and large (but constant) $d$, the minimax regret asymptotically equals $(d \pm 1)\log T + O(\log\log T)$ for a logistic-like expert class, a result that extends to a broader family of experts. We prove our findings by introducing the so-called *splittable label sequences*, which partition the weight space into $T^{d-1}$ regions on which the signs of the inner products between weights and features are constant, combined with tools from analytic combinatorics (e.g., Mellin transforms and the saddle-point method) and discrete geometry.
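To make the gap quantitative, a short expansion of the bounded-weight bound is instructive; writing $R = T^{\alpha}$ for a constant $\alpha \ge 1$ is an illustrative assumption, not a restriction imposed by the paper:

$$\frac{d}{2}\log\frac{TR^{2}}{d} \;=\; \frac{d}{2}\log T + d\log R - \frac{d}{2}\log d \;=\; \Big(\tfrac{1}{2}+\alpha\Big)\,d\log T - \frac{d}{2}\log d.$$

For $R \ge T$ (i.e., $\alpha \ge 1$), the bound therefore guarantees only a rate of $\tfrac{3d}{2}\log T$ or worse, whereas the minimax regret above is $(d \pm 1)\log T + O(\log\log T)$, strictly smaller whenever $d > 2$; this drop in the leading coefficient reflects the phase transition described above. The exponent $T^{d-1}$ is likewise consistent with classical discrete geometry: the hyperplanes $\{\mathbf{w} : \langle \mathbf{w}, \mathbf{x}_t\rangle = 0\}$ all pass through the origin, and $T$ such central hyperplanes in general position split $\mathbb{R}^d$ into $2\sum_{k=0}^{d-1}\binom{T-1}{k} = \Theta(T^{d-1})$ sign-pattern regions.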