Leveraging Good Representations in Linear Contextual Bandits

Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8371-8380, 2021.

Abstract

The linear contextual bandit literature is mostly focused on the design of efficient learning algorithms for a given representation. However, a contextual bandit problem may admit multiple linear representations, each one with different characteristics that directly impact the regret of the learning algorithm. In particular, recent works showed that there exist “good” representations for which constant problem-dependent regret can be achieved. In this paper, we first provide a systematic analysis of the different definitions of “good” representations proposed in the literature. We then propose a novel selection algorithm able to adapt to the best representation in a set of $M$ candidates. We show that the regret is indeed never worse than the regret obtained by running \textsc{LinUCB} on the best representation (up to a $\ln M$ factor). As a result, our algorithm achieves constant regret if a “good” representation is available in the set. Furthermore, we show that the algorithm may still achieve constant regret by implicitly constructing a “good” representation, even when none of the initial representations is “good”. Finally, we validate our theoretical findings in a number of standard contextual bandit problems.
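The selection problem described above (adapting among $M$ candidate linear representations of the same bandit) can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the paper's exact algorithm: it computes a standard LinUCB index for each arm under each representation and combines the $M$ optimistic indices by taking their minimum before choosing the arm; the function names and the combination rule are assumptions for illustration.

```python
import numpy as np

def linucb_index(phi, A_inv, b, alpha):
    """Optimistic (UCB) estimate of one arm's reward under one representation.

    phi:   feature vector of the arm under this representation
    A_inv: inverse of the ridge-regression design matrix
    b:     accumulated feature-weighted rewards
    alpha: exploration coefficient
    """
    theta_hat = A_inv @ b                       # ridge estimate of the parameter
    mean = phi @ theta_hat                      # plug-in reward estimate
    width = alpha * np.sqrt(phi @ A_inv @ phi)  # confidence width
    return mean + width

def select_action(features, stats, alpha=1.0):
    """Pick the arm maximizing the smallest UCB across the M representations.

    features[j][a]: feature vector of arm a under representation j
    stats[j]:       (A_inv, b) ridge statistics for representation j
    (Taking the min across representations is one plausible way to exploit
    all M optimistic indices at once; it is an assumption here.)
    """
    n_arms = len(features[0])
    combined = []
    for a in range(n_arms):
        ucbs = [linucb_index(features[j][a], *stats[j], alpha)
                for j in range(len(features))]
        combined.append(min(ucbs))
    return int(np.argmax(combined))
```

With all statistics initialized to the identity prior, the rule keeps the tightest valid optimistic estimate per arm, so a single accurate representation can already rule out over-optimistic arms suggested by the others.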

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-papini21a,
  title     = {Leveraging Good Representations in Linear Contextual Bandits},
  author    = {Papini, Matteo and Tirinzoni, Andrea and Restelli, Marcello and Lazaric, Alessandro and Pirotta, Matteo},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8371--8380},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/papini21a/papini21a.pdf},
  url       = {https://proceedings.mlr.press/v139/papini21a.html},
  abstract  = {The linear contextual bandit literature is mostly focused on the design of efficient learning algorithms for a given representation. However, a contextual bandit problem may admit multiple linear representations, each one with different characteristics that directly impact the regret of the learning algorithm. In particular, recent works showed that there exist “good” representations for which constant problem-dependent regret can be achieved. In this paper, we first provide a systematic analysis of the different definitions of “good” representations proposed in the literature. We then propose a novel selection algorithm able to adapt to the best representation in a set of $M$ candidates. We show that the regret is indeed never worse than the regret obtained by running \textsc{LinUCB} on the best representation (up to a $\ln M$ factor). As a result, our algorithm achieves constant regret if a “good” representation is available in the set. Furthermore, we show that the algorithm may still achieve constant regret by implicitly constructing a “good” representation, even when none of the initial representations is “good”. Finally, we validate our theoretical findings in a number of standard contextual bandit problems.}
}
Endnote
%0 Conference Paper
%T Leveraging Good Representations in Linear Contextual Bandits
%A Matteo Papini
%A Andrea Tirinzoni
%A Marcello Restelli
%A Alessandro Lazaric
%A Matteo Pirotta
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-papini21a
%I PMLR
%P 8371--8380
%U https://proceedings.mlr.press/v139/papini21a.html
%V 139
%X The linear contextual bandit literature is mostly focused on the design of efficient learning algorithms for a given representation. However, a contextual bandit problem may admit multiple linear representations, each one with different characteristics that directly impact the regret of the learning algorithm. In particular, recent works showed that there exist “good” representations for which constant problem-dependent regret can be achieved. In this paper, we first provide a systematic analysis of the different definitions of “good” representations proposed in the literature. We then propose a novel selection algorithm able to adapt to the best representation in a set of $M$ candidates. We show that the regret is indeed never worse than the regret obtained by running \textsc{LinUCB} on the best representation (up to a $\ln M$ factor). As a result, our algorithm achieves constant regret if a “good” representation is available in the set. Furthermore, we show that the algorithm may still achieve constant regret by implicitly constructing a “good” representation, even when none of the initial representations is “good”. Finally, we validate our theoretical findings in a number of standard contextual bandit problems.
APA
Papini, M., Tirinzoni, A., Restelli, M., Lazaric, A. &amp; Pirotta, M. (2021). Leveraging Good Representations in Linear Contextual Bandits. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8371-8380. Available from https://proceedings.mlr.press/v139/papini21a.html.