A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits

Kei Takemura, Shinji Ito, Daisuke Hatano, Hanna Sumita, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3367-3375, 2021.

Abstract

We investigate the misspecified linear contextual bandit (MLCB) problem, which is a generalization of the linear contextual bandit (LCB) problem. The MLCB problem is a decision-making problem in which, in each round, a learner observes $d$-dimensional feature vectors, called arms, chooses one of $K$ arms, and then obtains a reward from the chosen arm. The learner aims to maximize the sum of the rewards over $T$ rounds. In contrast to the LCB problem, the expected rewards in the MLCB problem may not be represented by a linear function of the feature vectors; instead, they are approximated by a linear function up to an additive approximation parameter $\varepsilon \geq 0$. In this paper, we propose an algorithm that achieves $\tilde{O}(\sqrt{dT\log(K)} + \varepsilon\sqrt{d}T)$ regret, where $\tilde{O}(\cdot)$ ignores polylogarithmic factors in $d$ and $T$. This is the first algorithm that guarantees a high-probability regret bound for the MLCB problem without knowledge of the approximation parameter $\varepsilon$.
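For concreteness, the following is a minimal sketch of the setting as we read it from the abstract; the symbols $\mu_t$, $x_{t,a}$, $\theta^*$, and $a_t$ are our own notation and are not taken from the paper. Misspecification means there exists an unknown parameter $\theta^* \in \mathbb{R}^d$ such that
\[
\bigl| \mu_t(a) - \langle \theta^*, x_{t,a} \rangle \bigr| \le \varepsilon \quad \text{for all rounds } t \text{ and arms } a \in \{1,\dots,K\},
\]
where $\mu_t(a)$ is the expected reward of arm $a$ in round $t$ and $x_{t,a} \in \mathbb{R}^d$ is its feature vector; the case $\varepsilon = 0$ recovers the LCB problem. The regret over $T$ rounds is
\[
R_T = \sum_{t=1}^{T} \Bigl( \max_{a \in \{1,\dots,K\}} \mu_t(a) - \mu_t(a_t) \Bigr),
\]
where $a_t$ is the arm chosen in round $t$. The stated bound $\tilde{O}(\sqrt{dT\log(K)} + \varepsilon\sqrt{d}T)$ on $R_T$ holds with high probability without $\varepsilon$ being given as input to the algorithm.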

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-takemura21a,
  title     = {A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits},
  author    = {Takemura, Kei and Ito, Shinji and Hatano, Daisuke and Sumita, Hanna and Fukunaga, Takuro and Kakimura, Naonori and Kawarabayashi, Ken-ichi},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3367--3375},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/takemura21a/takemura21a.pdf},
  url       = {https://proceedings.mlr.press/v130/takemura21a.html}
}
Endnote
%0 Conference Paper
%T A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits
%A Kei Takemura
%A Shinji Ito
%A Daisuke Hatano
%A Hanna Sumita
%A Takuro Fukunaga
%A Naonori Kakimura
%A Ken-ichi Kawarabayashi
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-takemura21a
%I PMLR
%P 3367--3375
%U https://proceedings.mlr.press/v130/takemura21a.html
%V 130
APA
Takemura, K., Ito, S., Hatano, D., Sumita, H., Fukunaga, T., Kakimura, N. & Kawarabayashi, K. (2021). A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3367-3375. Available from https://proceedings.mlr.press/v130/takemura21a.html.