Asymptotically Optimal Information-Directed Sampling

Johannes Kirschner, Tor Lattimore, Claire Vernade, Csaba Szepesvari
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:2777-2821, 2021.

Abstract

We introduce a simple and efficient algorithm for stochastic linear bandits with finitely many actions that is asymptotically optimal and (nearly) worst-case optimal in finite time. The approach is based on the frequentist information-directed sampling (IDS) framework, with a surrogate for the information gain that is informed by the optimization problem that defines the asymptotic lower bound. Our analysis sheds light on how IDS balances the trade-off between regret and information and uncovers a surprising connection between the recently proposed primal-dual methods and the IDS algorithm. We demonstrate empirically that IDS is competitive with UCB in finite time, and can be significantly better in the asymptotic regime.
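
For intuition, the IDS rule referenced in the abstract selects, at each round, a sampling distribution over actions that minimizes the information ratio: squared expected regret divided by expected information gain. Below is a minimal Python sketch of that selection step, assuming point estimates of the per-action gaps (delta) and of the per-action information gain (info) are already available; these names, and the grid search, are illustrative only and do not reproduce the paper's actual surrogate. A known property of the minimizer, that it is supported on at most two actions, makes a pairwise search sufficient.

    import numpy as np

    def ids_distribution(delta, info, grid=1001):
        """Sketch of one IDS selection step.

        delta: estimated per-action regret gaps, shape (K,)
        info:  estimated per-action information gain, shape (K,), positive
        Returns a distribution pi minimizing
            (sum_a pi[a] * delta[a])**2 / (sum_a pi[a] * info[a])
        over distributions supported on at most two actions.
        """
        K = len(delta)
        q = np.linspace(0.0, 1.0, grid)  # mixing weights to try
        best_ratio, best_pi = np.inf, None
        for i in range(K):
            for j in range(K):
                d = q * delta[i] + (1 - q) * delta[j]  # expected regret
                g = q * info[i] + (1 - q) * info[j]    # expected info gain
                ratio = d * d / np.maximum(g, 1e-12)   # information ratio
                k = int(np.argmin(ratio))
                if ratio[k] < best_ratio:
                    best_ratio = ratio[k]
                    best_pi = np.zeros(K)
                    best_pi[i] += q[k]
                    best_pi[j] += 1.0 - q[k]
        return best_pi

    # Toy usage with 3 actions (hypothetical gap and gain estimates).
    pi = ids_distribution(np.array([0.05, 0.3, 0.9]),
                          np.array([0.10, 0.5, 1.2]))
    print(pi, pi.sum())  # a distribution on at most two actions

The paper's contribution lies in the particular information-gain surrogate, derived from the asymptotic lower-bound optimization problem, plugged into a rule of this shape; the sketch only shows the generic trade-off between regret and information that the abstract describes.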

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-kirschner21a,
  title     = {Asymptotically Optimal Information-Directed Sampling},
  author    = {Kirschner, Johannes and Lattimore, Tor and Vernade, Claire and Szepesvari, Csaba},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {2777--2821},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/kirschner21a/kirschner21a.pdf},
  url       = {https://proceedings.mlr.press/v134/kirschner21a.html},
  abstract  = {We introduce a simple and efficient algorithm for stochastic linear bandits with finitely many actions that is asymptotically optimal and (nearly) worst-case optimal in finite time. The approach is based on the frequentist information-directed sampling (IDS) framework, with a surrogate for the information gain that is informed by the optimization problem that defines the asymptotic lower bound. Our analysis sheds light on how IDS balances the trade-off between regret and information and uncovers a surprising connection between the recently proposed primal-dual methods and the IDS algorithm. We demonstrate empirically that IDS is competitive with UCB in finite time, and can be significantly better in the asymptotic regime.}
}
Endnote
%0 Conference Paper
%T Asymptotically Optimal Information-Directed Sampling
%A Johannes Kirschner
%A Tor Lattimore
%A Claire Vernade
%A Csaba Szepesvari
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-kirschner21a
%I PMLR
%P 2777--2821
%U https://proceedings.mlr.press/v134/kirschner21a.html
%V 134
%X We introduce a simple and efficient algorithm for stochastic linear bandits with finitely many actions that is asymptotically optimal and (nearly) worst-case optimal in finite time. The approach is based on the frequentist information-directed sampling (IDS) framework, with a surrogate for the information gain that is informed by the optimization problem that defines the asymptotic lower bound. Our analysis sheds light on how IDS balances the trade-off between regret and information and uncovers a surprising connection between the recently proposed primal-dual methods and the IDS algorithm. We demonstrate empirically that IDS is competitive with UCB in finite time, and can be significantly better in the asymptotic regime.
APA
Kirschner, J., Lattimore, T., Vernade, C. & Szepesvari, C. (2021). Asymptotically Optimal Information-Directed Sampling. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:2777-2821. Available from https://proceedings.mlr.press/v134/kirschner21a.html.