Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits

Ambrus Tamás, Szabolcs Szentpéteri, Balázs Csáji
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2116-2124, 2025.

Abstract

Stochastic multi-armed bandits (MABs) provide a fundamental reinforcement learning model to study sequential decision making in uncertain environments. The upper confidence bounds (UCB) algorithm gave birth to the renaissance of bandit algorithms, as it achieves near-optimal regret rates under various moment assumptions. Until recently, most UCB methods relied on concentration inequalities that lead to confidence bounds depending on moment parameters, such as the variance proxy, which are usually unknown in practice. In this paper, we propose a new distribution-free, data-driven UCB algorithm for symmetric reward distributions, which needs no moment information. The key idea is to combine a refined, one-sided version of the recently developed resampled median-of-means (RMM) method with UCB. We prove a near-optimal regret bound for the proposed anytime, parameter-free RMM-UCB method, even for heavy-tailed distributions.
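
To make the algorithmic idea concrete, the sketch below shows a generic UCB loop in which the empirical mean is replaced by a median-of-means estimate. It is only an illustration, not the authors' RMM-UCB procedure: it still relies on an assumed variance bound (sigma2) and on illustrative constants to form the confidence radius, i.e., exactly the kind of moment information that the paper's resampling-based, data-driven bound avoids. The function names (median_of_means, mom_ucb) and all constants are hypothetical.

import numpy as np

def median_of_means(samples, n_blocks):
    # Split the samples into blocks, average each block, take the median of the block means.
    samples = np.asarray(samples, dtype=float)
    n_blocks = max(1, min(n_blocks, len(samples)))
    blocks = np.array_split(samples, n_blocks)
    return float(np.median([block.mean() for block in blocks]))

def mom_ucb(arms, horizon, sigma2=1.0):
    # Generic UCB loop with a median-of-means index (illustrative sketch only).
    # `arms` is a list of callables, each returning one (possibly heavy-tailed) reward.
    # `sigma2` is an assumed variance bound; the paper's RMM-UCB needs no such parameter.
    K = len(arms)
    rewards = [[] for _ in range(K)]
    for a in range(K):                          # pull each arm once to initialize
        rewards[a].append(arms[a]())
    for t in range(K, horizon):
        delta = 1.0 / (t + 1) ** 2              # confidence level shrinking with time
        indices = []
        for a in range(K):
            n = len(rewards[a])
            n_blocks = int(np.ceil(8.0 * np.log(1.0 / delta)))            # textbook block count
            mom = median_of_means(rewards[a], n_blocks)
            radius = np.sqrt(32.0 * sigma2 * np.log(1.0 / delta) / n)     # illustrative constants
            indices.append(mom + radius)
        best = int(np.argmax(indices))
        rewards[best].append(arms[best]())
    return [len(r) for r in rewards]            # number of pulls per arm

# Example: two heavy-tailed (Student-t) arms with means 0.0 and 0.5.
rng = np.random.default_rng(1)
arms = [lambda: float(rng.standard_t(2.5)), lambda: 0.5 + float(rng.standard_t(2.5))]
print(mom_ucb(arms, horizon=2000))

The point of the paper, as stated in the abstract, is precisely to remove the need for sigma2 and such hand-picked constants: the one-sided resampled median-of-means construction yields a data-driven confidence bound and an anytime, parameter-free RMM-UCB method with near-optimal regret.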

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-tamas25a,
  title     = {Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits},
  author    = {Tam{\'a}s, Ambrus and Szentp{\'e}teri, Szabolcs and Cs{\'a}ji, Bal{\'a}zs},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2116--2124},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/tamas25a/tamas25a.pdf},
  url       = {https://proceedings.mlr.press/v258/tamas25a.html},
  abstract  = {Stochastic multi-armed bandits (MABs) provide a fundamental reinforcement learning model to study sequential decision making in uncertain environments. The upper confidence bounds (UCB) algorithm gave birth to the renaissance of bandit algorithms, as it achieves near-optimal regret rates under various moment assumptions. Up until recently most UCB methods relied on concentration inequalities leading to confidence bounds which depend on moment parameters, such as the variance proxy, that are usually unknown in practice. In this paper, we propose a new distribution-free, data-driven UCB algorithm for symmetric reward distributions, which needs no moment information. The key idea is to combine a refined, one-sided version of the recently developed resampled median-of-means (RMM) method with UCB. We prove a near-optimal regret bound for the proposed anytime, parameter-free RMM-UCB method, even for heavy-tailed distributions.}
}
Endnote
%0 Conference Paper
%T Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits
%A Ambrus Tamás
%A Szabolcs Szentpéteri
%A Balázs Csáji
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-tamas25a
%I PMLR
%P 2116--2124
%U https://proceedings.mlr.press/v258/tamas25a.html
%V 258
%X Stochastic multi-armed bandits (MABs) provide a fundamental reinforcement learning model to study sequential decision making in uncertain environments. The upper confidence bounds (UCB) algorithm gave birth to the renaissance of bandit algorithms, as it achieves near-optimal regret rates under various moment assumptions. Up until recently most UCB methods relied on concentration inequalities leading to confidence bounds which depend on moment parameters, such as the variance proxy, that are usually unknown in practice. In this paper, we propose a new distribution-free, data-driven UCB algorithm for symmetric reward distributions, which needs no moment information. The key idea is to combine a refined, one-sided version of the recently developed resampled median-of-means (RMM) method with UCB. We prove a near-optimal regret bound for the proposed anytime, parameter-free RMM-UCB method, even for heavy-tailed distributions.
APA
Tamás, A., Szentpéteri, S. & Csáji, B.. (2025). Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2116-2124 Available from https://proceedings.mlr.press/v258/tamas25a.html.
