Full Swap Regret and Discretized Calibration
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:444-480, 2025.
Abstract
We study the problem of minimizing swap regret in structured normal-form games. Players have a very large (potentially infinite) number of pure actions, but each action has an embedding into d-dimensional space and payoffs are given by bilinear functions of these embeddings. We provide an efficient learning algorithm for this setting that incurs at most Õ(T^{(d+1)/(d+3)}) swap regret after T rounds. To achieve this, we introduce a new online learning problem we call full swap regret minimization. In this problem, a learner repeatedly takes a (randomized) action in a bounded convex d-dimensional action set K and then receives a loss from the adversary, with the goal of minimizing their regret with respect to the worst-case swap function mapping K to K. For varying assumptions about the convexity and smoothness of the loss functions, we design algorithms with full swap regret bounds ranging from O(T^{d/(d+2)}) to O(T^{(d+1)/(d+2)}). Finally, we apply these tools to the problem of online forecasting to minimize calibration error, showing that several notions of calibration can be viewed as specific instances of full swap regret. In particular, we design efficient algorithms for online forecasting that guarantee at most O(T^{1/3}) ℓ2-calibration error and O(max …) discretized-calibration error (when the forecaster is restricted to predicting multiples of ε).
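To make the calibration objective concrete, the following is a minimal sketch (not taken from the paper) of one common definition of ℓ2-calibration error for a binary-outcome forecaster: for each distinct forecast value p, compare p to the empirical frequency of the outcome on the rounds where p was predicted, weighting by the number of such rounds. The function name and this particular definition are illustrative assumptions; the paper's formal definition may differ in normalization.

```python
from collections import defaultdict

def l2_calibration_error(predictions, outcomes):
    """Illustrative l2-calibration error.

    predictions: forecasts in [0, 1] over T rounds
    outcomes: realized binary outcomes (0 or 1) over the same rounds

    For each distinct forecast value p, with n_p rounds where p was
    predicted and avg_p the mean outcome on those rounds, accumulates
    n_p * (p - avg_p)^2. A perfectly calibrated forecaster scores 0.
    """
    buckets = defaultdict(list)
    for p, y in zip(predictions, outcomes):
        buckets[p].append(y)
    error = 0.0
    for p, ys in buckets.items():
        avg_p = sum(ys) / len(ys)
        error += len(ys) * (p - avg_p) ** 2
    return error
```

For example, forecasting 0.5 on two rounds whose outcomes are 1 and 0 is perfectly calibrated (error 0), while forecasting 1.0 on two rounds whose outcomes are both 0 incurs error 2. A discretized forecaster as in the abstract would simply restrict the values in `predictions` to multiples of ε.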