Exploration via linearly perturbed loss minimisation

David Janz, Shuai Liu, Alex Ayoub, Csaba Szepesvári
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:721-729, 2024.

Abstract

We introduce \emph{exploration via linear loss perturbations} (EVILL), a randomised exploration method for structured stochastic bandit problems that works by solving for the minimiser of a linearly perturbed regularised negative log-likelihood function. We show that, for the case of generalised linear bandits, EVILL reduces to perturbed history exploration (PHE), a method where exploration is done by training on randomly perturbed rewards. In doing so, we provide a simple and clean explanation of when and why random reward perturbations give rise to good bandit algorithms. We propose data-dependent perturbations not present in previous PHE-type methods that allow EVILL to match the performance of Thompson-sampling-style parameter-perturbation methods, both in theory and in practice. Moreover, we show an example outside generalised linear bandits where PHE leads to inconsistent estimates, and thus linear regret, while EVILL remains performant. Like PHE, EVILL can be implemented in just a few lines of code.
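The abstract describes the mechanism only at a high level: at each round, fit the model by minimising the regularised negative log-likelihood with an extra random linear term, then act greedily with respect to the perturbed fit. The sketch below is an illustration of that idea for a logistic (generalised linear) bandit, not the paper's exact prescription: the function names, the optimiser, and in particular the perturbation distribution (a plain isotropic Gaussian here, whereas the paper advocates data-dependent perturbations) are assumptions made for illustration.

# Illustrative sketch of linearly perturbed loss minimisation for a logistic
# (generalised linear) bandit. The perturbation scale/distribution below are
# placeholder assumptions; the paper's EVILL uses data-dependent perturbations.
import numpy as np
from scipy.optimize import minimize

def perturbed_loss(theta, X, r, z, lam):
    """Regularised logistic negative log-likelihood plus a linear term -<z, theta>."""
    logits = X @ theta
    nll = np.sum(np.logaddexp(0.0, logits) - r * logits)   # logistic NLL, r in {0, 1}
    return nll + 0.5 * lam * theta @ theta - z @ theta

def choose_arm(X, r, arms, lam=1.0, scale=1.0, rng=None):
    """Pick an arm by minimising a randomly perturbed loss (one sketch of the idea)."""
    rng = rng or np.random.default_rng()
    d = arms.shape[1]
    z = scale * rng.standard_normal(d)                      # illustrative perturbation
    res = minimize(perturbed_loss, np.zeros(d), args=(X, r, z, lam), method="BFGS")
    theta_tilde = res.x                                      # perturbed loss minimiser
    return int(np.argmax(arms @ theta_tilde))                # greedy w.r.t. perturbed fit

Per the abstract, the distinguishing ingredient of EVILL relative to earlier PHE-type methods is that the perturbation is data-dependent; the fixed-scale Gaussian above is only a stand-in for that choice.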

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-janz24a,
  title     = {Exploration via linearly perturbed loss minimisation},
  author    = {Janz, David and Liu, Shuai and Ayoub, Alex and Szepesv\'{a}ri, Csaba},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {721--729},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/janz24a/janz24a.pdf},
  url       = {https://proceedings.mlr.press/v238/janz24a.html}
}
APA
Janz, D., Liu, S., Ayoub, A. & Szepesvári, C. (2024). Exploration via linearly perturbed loss minimisation. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:721-729. Available from https://proceedings.mlr.press/v238/janz24a.html.