Sparsity-Based Interpolation of External, Internal and Swap Regret

Zhou Lu, Y Jennifer Sun, Zhiyu Zhang
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:3795-3828, 2025.

Abstract

Focusing on the expert problem in online learning, this paper studies the interpolation of several performance metrics via $\phi$-regret minimization, which measures the total loss of an algorithm by its regret with respect to an arbitrary action modification rule $\phi$. With $d$ experts and $T\gg d$ rounds in total, we present a single algorithm achieving the instance-adaptive $\phi$-regret bound $\tilde O\left(\min\left\{\sqrt{d-d^{\mathrm{unif}}_\phi+1},\sqrt{d-d^{\mathrm{self}}_\phi}\right\}\cdot\sqrt{T}\right)$, where $d^{\mathrm{unif}}_\phi$ is the maximum number of experts modified identically by $\phi$, and $d^{\mathrm{self}}_\phi$ is the number of experts that $\phi$ trivially maps to themselves. By recovering the optimal $O(\sqrt{T\log d})$ external regret bound when $d^{\mathrm{unif}}_\phi=d$, the standard $\tilde O(\sqrt{T})$ internal regret bound when $d^{\mathrm{self}}_\phi=d-1$, and the optimal $\tilde O(\sqrt{dT})$ swap regret bound in the worst case, we improve upon existing algorithms in the intermediate regimes. In addition, the computational complexity of our algorithm matches that of the standard swap-regret minimization algorithm due to Blum and Mansour (2007). Technically, building on the well-known reduction from $\phi$-regret minimization to external regret minimization on stochastic matrices, our main idea is to further convert the latter to online linear regression using Haar-wavelet-inspired matrix features. Then, by associating the complexity of each $\phi$ instance with its sparsity under the feature representation, we apply techniques from comparator-adaptive online learning to exploit this sparsity in the regression subroutine.
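To make the two instance-dependent quantities concrete, the following minimal Python sketch computes $d^{\mathrm{unif}}_\phi$ and $d^{\mathrm{self}}_\phi$ for deterministic modification rules $\phi$ (represented as a list mapping each expert index to its replacement) and evaluates the interpolated rate in the three regimes named above. The helper names (d_unif, d_self, phi_regret_rate) are ours for illustration, not the paper's, and the polylogarithmic factors hidden by $\tilde O(\cdot)$ are ignored.

    import math

    def d_unif(phi):
        # Maximum number of experts that phi maps to a single common expert.
        counts = {}
        for target in phi:
            counts[target] = counts.get(target, 0) + 1
        return max(counts.values())

    def d_self(phi):
        # Number of experts that phi maps to themselves (fixed points).
        return sum(1 for i, target in enumerate(phi) if target == i)

    def phi_regret_rate(phi, T):
        # The interpolated rate from the abstract, up to the polylog
        # factors hidden by the tilde-O notation.
        d = len(phi)
        return min(math.sqrt(d - d_unif(phi) + 1),
                   math.sqrt(d - d_self(phi))) * math.sqrt(T)

    d, T = 8, 10_000

    # External regret: phi maps every expert to one fixed comparator,
    # so d_unif = d and the rate collapses to sqrt(T).
    phi_external = [0] * d

    # Internal regret: phi deviates on a single expert (here 3 -> 5),
    # so d_self = d - 1 and the rate is again sqrt(T).
    phi_internal = list(range(d))
    phi_internal[3] = 5

    # Worst-case swap regret: a fixed-point-free cyclic shift with no
    # large uniform block, giving the sqrt(d * T) rate.
    phi_swap = [(i + 1) % d for i in range(d)]

    for name, phi in [("external", phi_external),
                      ("internal", phi_internal),
                      ("swap", phi_swap)]:
        print(f"{name:8s}  d_unif={d_unif(phi)}  d_self={d_self(phi)}  "
              f"rate={phi_regret_rate(phi, T):.0f}")

Note that deterministic expert-to-expert mappings as above are the special case of $\phi$ used to define internal and swap regret; the paper's $\phi$ is a general action modification rule, handled through the reduction to stochastic matrices mentioned in the abstract.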

Cite this Paper

BibTeX
@InProceedings{pmlr-v291-lu25a,
  title     = {Sparsity-Based Interpolation of External, Internal and Swap Regret},
  author    = {Lu, Zhou and Sun, Y Jennifer and Zhang, Zhiyu},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {3795--3828},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/lu25a/lu25a.pdf},
  url       = {https://proceedings.mlr.press/v291/lu25a.html}
}
Endnote
%0 Conference Paper
%T Sparsity-Based Interpolation of External, Internal and Swap Regret
%A Zhou Lu
%A Y Jennifer Sun
%A Zhiyu Zhang
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-lu25a
%I PMLR
%P 3795--3828
%U https://proceedings.mlr.press/v291/lu25a.html
%V 291
APA
Lu, Z., Sun, Y.J. & Zhang, Z. (2025). Sparsity-Based Interpolation of External, Internal and Swap Regret. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:3795-3828. Available from https://proceedings.mlr.press/v291/lu25a.html.
