Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model

Gi-Soo Kim, Myunghee Cho Paik
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3389-3397, 2019.

Abstract

Contextual multi-armed bandit (MAB) algorithms have shown promise for maximizing cumulative rewards in sequential decision tasks such as news article recommendation, web page ad placement, and mobile health. However, most proposed contextual MAB algorithms assume a linear relationship between the reward and the context of the chosen action. This paper proposes a new contextual MAB algorithm for a relaxed, semiparametric reward model that supports nonstationarity. The proposed method is less restrictive, easier to implement, and faster than two alternative algorithms that consider the same model, while achieving a tight regret upper bound. We prove that the high-probability upper bound of the regret incurred by the proposed algorithm has the same order as that of the Thompson sampling algorithm for linear reward models. The proposed and existing algorithms are evaluated via simulation and also applied to Yahoo! news article recommendation log data.
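To make the modeling contrast concrete, the sketch below compares the standard linear reward model with a semiparametric relaxation of the kind the abstract describes. The notation (context x_{t,a}, parameter \mu, time-varying intercept \nu(t)) is our own shorthand for illustration, not necessarily the paper's exact definitions:

% Linear reward model (standard contextual MAB assumption):
\[ \mathbb{E}[\, r_t \mid x_{t,a} \,] = x_{t,a}^{\top}\mu \]
% Semiparametric relaxation: an unspecified, time-varying intercept
% \nu(t), shared across all actions, absorbs nonstationary drift:
\[ \mathbb{E}[\, r_t \mid x_{t,a} \,] = \nu(t) + x_{t,a}^{\top}\mu \]

Under the second model, only \mu is parametric; \nu(t) is left unspecified, which is what makes the model semiparametric and strictly less restrictive than the linear one.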

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-kim19d,
  title     = {Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model},
  author    = {Kim, Gi-Soo and Paik, Myunghee Cho},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3389--3397},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/kim19d/kim19d.pdf},
  url       = {https://proceedings.mlr.press/v97/kim19d.html}
}
Endnote
%0 Conference Paper
%T Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model
%A Gi-Soo Kim
%A Myunghee Cho Paik
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kim19d
%I PMLR
%P 3389--3397
%U https://proceedings.mlr.press/v97/kim19d.html
%V 97
APA
Kim, G. & Paik, M.C. (2019). Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3389-3397. Available from https://proceedings.mlr.press/v97/kim19d.html.
