Regularized Contextual Bandits

Xavier Fontaine, Quentin Berthet, Vianney Perchet
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2144-2153, 2019.

Abstract

We consider the stochastic contextual bandit problem with additional regularization. The motivation comes from problems where the policy of the agent must remain close to some baseline policy which is known to perform well on the task. To tackle this problem we use a nonparametric model and propose an algorithm that splits the context space into bins and solves, simultaneously and independently, a regularized multi-armed bandit instance on each bin. We derive slow and fast rates of convergence, depending on the unknown complexity of the problem. We also consider a new, relevant margin condition to obtain problem-independent convergence rates, yielding intermediate rates that interpolate between the aforementioned slow and fast rates.
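The binning scheme described in the abstract can be made concrete with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the paper's actual algorithm: it partitions a one-dimensional context space [0, 1] into uniform bins and runs an independent regularized bandit in each bin. The choice of a KL penalty toward the baseline policy (with strength lam), the UCB-style exploration bonus, and all names (bin_index, regularized_policy, BinBandit) are hypothetical choices made here for illustration.

    import numpy as np

    def bin_index(x, n_bins):
        # Uniform partition of [0, 1) into n_bins intervals.
        return min(int(x * n_bins), n_bins - 1)

    def regularized_policy(mu_hat, baseline, lam):
        # argmax_p <p, mu_hat> - lam * KL(p || baseline) over the simplex
        # has the closed form p_i proportional to baseline_i * exp(mu_hat_i / lam).
        logits = np.log(baseline) + mu_hat / lam
        logits -= logits.max()            # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    class BinBandit:
        """A regularized multi-armed bandit run independently on one bin."""
        def __init__(self, n_arms, baseline, lam):
            self.counts = np.zeros(n_arms)
            self.sums = np.zeros(n_arms)
            self.baseline = baseline
            self.lam = lam

        def act(self, rng, t):
            n = np.maximum(self.counts, 1.0)
            mu_hat = self.sums / n
            bonus = np.sqrt(2.0 * np.log(t + 1) / n)   # UCB-style optimism
            p = regularized_policy(mu_hat + bonus, self.baseline, self.lam)
            return rng.choice(len(p), p=p)

        def update(self, arm, reward):
            self.counts[arm] += 1.0
            self.sums[arm] += reward

    # Toy run: contexts uniform on [0, 1], two arms whose Bernoulli means
    # cross at x = 0.5, and a uniform baseline policy.
    rng = np.random.default_rng(0)
    n_bins, n_arms, lam, horizon = 8, 2, 0.5, 5000
    baseline = np.full(n_arms, 1.0 / n_arms)
    bandits = [BinBandit(n_arms, baseline, lam) for _ in range(n_bins)]

    for t in range(horizon):
        x = rng.random()
        b = bandits[bin_index(x, n_bins)]
        arm = b.act(rng, t)
        means = np.array([x, 1.0 - x])     # arm means depend on the context
        reward = float(rng.random() < means[arm])
        b.update(arm, reward)

The KL-to-baseline penalty is convenient for a sketch of this kind because the per-bin optimization has a closed-form Gibbs solution: the policy stays near the baseline for large lam and approaches the greedy policy as lam tends to 0, which mirrors the trade-off the abstract describes between following a trusted baseline and optimizing rewards.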

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-fontaine19a,
  title     = {Regularized Contextual Bandits},
  author    = {Fontaine, Xavier and Berthet, Quentin and Perchet, Vianney},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2144--2153},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/fontaine19a/fontaine19a.pdf},
  url       = {https://proceedings.mlr.press/v89/fontaine19a.html},
  abstract  = {We consider the stochastic contextual bandit problem with additional regularization. The motivation comes from problems where the policy of the agent must be close to some baseline policy which is known to perform well on the task. To tackle this problem we use a nonparametric model and propose an algorithm splitting the context space into bins, and solving simultaneously — and independently — regularized multi-armed bandit instances on each bin. We derive slow and fast rates of convergence, depending on the unknown complexity of the problem. We also consider a new relevant margin condition to get problem-independent convergence rates, ending up in intermediate convergence rates interpolating between the aforementioned slow and fast rates.}
}
APA
Fontaine, X., Berthet, Q. & Perchet, V. (2019). Regularized Contextual Bandits. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2144-2153. Available from https://proceedings.mlr.press/v89/fontaine19a.html.
