Sublinear Optimal Policy Value Estimation in Contextual Bandits

Weihao Kong, Emma Brunskill, Gregory Valiant
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4377-4387, 2020.

Abstract

We study the problem of estimating the expected reward of the optimal policy in the stochastic disjoint linear bandit setting. We prove that for certain settings it is possible to obtain an accurate estimate of the optimal policy value even with a sublinear number of samples, whereas a linear number of samples would be needed to reliably estimate the reward that can be obtained by any fixed policy. We establish near-matching information-theoretic lower bounds, showing that our algorithm achieves near-optimal estimation error. Finally, we demonstrate the effectiveness of our algorithm on joke recommendation and cancer inhibition dosage selection problems using real datasets.
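For context, here is a minimal formalization of the estimand; the notation is ours, assumed for illustration rather than taken from the paper. In the disjoint linear contextual bandit model, each arm a in {1, ..., K} carries its own unknown parameter vector beta_a in R^d, the reward for pulling arm a under a context x drawn from a distribution D is linear in x up to zero-mean noise, and the optimal policy value V* is the expected reward of playing the best arm for every context:

    % Reward model: a separate (disjoint) linear parameter per arm.
    r_t = x_t^{\top} \beta_{a_t} + \eta_t, \qquad x_t \sim \mathcal{D}, \quad \mathbb{E}[\eta_t] = 0,

    % The target quantity: the value of the context-wise best arm, in expectation.
    V^{*} = \mathbb{E}_{x \sim \mathcal{D}}\!\left[\, \max_{a \in \{1, \dots, K\}} x^{\top} \beta_a \right].

The contrast drawn in the abstract is that V*, a single scalar, can be estimated with fewer samples than would be needed to learn, or even to evaluate, a near-optimal policy; on our reading, "sublinear" refers to the dependence on the context dimension d, though the paper itself makes the claim precise.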

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-kong20b,
  title     = {Sublinear Optimal Policy Value Estimation in Contextual Bandits},
  author    = {Kong, Weihao and Brunskill, Emma and Valiant, Gregory},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4377--4387},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/kong20b/kong20b.pdf},
  url       = {https://proceedings.mlr.press/v108/kong20b.html},
  abstract  = {We study the problem of estimating the expected reward of the optimal policy in the stochastic disjoint linear bandit setting. We prove that for certain settings it is possible to obtain an accurate estimate of the optimal policy value even with a sublinear number of samples, whereas a linear number of samples would be needed to reliably estimate the reward that can be obtained by any fixed policy. We establish near-matching information-theoretic lower bounds, showing that our algorithm achieves near-optimal estimation error. Finally, we demonstrate the effectiveness of our algorithm on joke recommendation and cancer inhibition dosage selection problems using real datasets.}
}
EndNote
%0 Conference Paper
%T Sublinear Optimal Policy Value Estimation in Contextual Bandits
%A Weihao Kong
%A Emma Brunskill
%A Gregory Valiant
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-kong20b
%I PMLR
%P 4377--4387
%U https://proceedings.mlr.press/v108/kong20b.html
%V 108
%X We study the problem of estimating the expected reward of the optimal policy in the stochastic disjoint linear bandit setting. We prove that for certain settings it is possible to obtain an accurate estimate of the optimal policy value even with a sublinear number of samples, whereas a linear number of samples would be needed to reliably estimate the reward that can be obtained by any fixed policy. We establish near-matching information-theoretic lower bounds, showing that our algorithm achieves near-optimal estimation error. Finally, we demonstrate the effectiveness of our algorithm on joke recommendation and cancer inhibition dosage selection problems using real datasets.
APA
Kong, W., Brunskill, E., & Valiant, G. (2020). Sublinear Optimal Policy Value Estimation in Contextual Bandits. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:4377-4387. Available from https://proceedings.mlr.press/v108/kong20b.html.