Contextual Bandit Learning with Predictable Rewards

Alekh Agarwal, Miroslav Dudik, Satyen Kale, John Langford, Robert Schapire
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:19-26, 2012.

Abstract

Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (a context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always correctly predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to that achievable in the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior worst-case performance even under the realizability assumption. However, we show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) under which our new algorithm has constant regret, unlike previous approaches.
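
For concreteness, the interaction protocol described above can be sketched in a few lines. The following is a minimal illustration, not the paper's Regressor Elimination algorithm: it assumes a linear-logistic regressor class for the realizable expected reward and uses a uniformly random policy as a stand-in learner, so all names (theta_star, expected_reward, and so on) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, DIM, T = 5, 10, 1000

# Realizability: the expected reward equals f*(x, a) for some f* in a known class.
# Here the (assumed) class is linear-logistic: f(x, a) = sigmoid(theta[a] . x).
theta_star = rng.normal(size=(N_ACTIONS, DIM))

def expected_reward(x, a):
    """f*(x, a): true expected reward of action a in context x, in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-theta_star[a] @ x))

regret = 0.0
for t in range(T):
    x = rng.normal(size=DIM)                  # learner receives a context
    a = int(rng.integers(N_ACTIONS))          # placeholder policy: uniform at random
    r = float(rng.random() < expected_reward(x, a))  # only the chosen action's reward is observed
    best = max(expected_reward(x, b) for b in range(N_ACTIONS))
    regret += best - expected_reward(x, a)    # per-round expected regret

print(f"cumulative regret of the uniform policy after {T} rounds: {regret:.1f}")

Note that the learner only ever sees the sampled reward r for the action it chose; the expected rewards of the other actions, which define the regret benchmark, remain hidden. A real algorithm replaces the uniform choice with a data-driven policy to drive the cumulative regret down.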

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-agarwal12,
  title     = {Contextual Bandit Learning with Predictable Rewards},
  author    = {Alekh Agarwal and Miroslav Dudik and Satyen Kale and John Langford and Robert Schapire},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {19--26},
  year      = {2012},
  editor    = {Neil D. Lawrence and Mark Girolami},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/agarwal12/agarwal12.pdf},
  url       = {http://proceedings.mlr.press/v22/agarwal12.html},
  abstract  = {Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (a context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always correctly predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to that achievable in the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior worst-case performance even under the realizability assumption. However, we show that for \emph{any} set of policies (mapping contexts to actions), there is a distribution over rewards (given context) under which our new algorithm has \emph{constant} regret, unlike previous approaches.}
}
Endnote
%0 Conference Paper
%T Contextual Bandit Learning with Predictable Rewards
%A Alekh Agarwal
%A Miroslav Dudik
%A Satyen Kale
%A John Langford
%A Robert Schapire
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-agarwal12
%I PMLR
%J Proceedings of Machine Learning Research
%P 19--26
%U http://proceedings.mlr.press/v22/agarwal12.html
%V 22
%W PMLR
%X Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (a context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always correctly predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to that achievable in the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior worst-case performance even under the realizability assumption. However, we show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) under which our new algorithm has constant regret, unlike previous approaches.
RIS
TY - CPAPER
TI - Contextual Bandit Learning with Predictable Rewards
AU - Alekh Agarwal
AU - Miroslav Dudik
AU - Satyen Kale
AU - John Langford
AU - Robert Schapire
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
PY - 2012/03/21
DA - 2012/03/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-agarwal12
PB - PMLR
SP - 19
EP - 26
DP - PMLR
L1 - http://proceedings.mlr.press/v22/agarwal12/agarwal12.pdf
UR - http://proceedings.mlr.press/v22/agarwal12.html
AB - Contextual bandit learning is a reinforcement learning problem in which the learner repeatedly receives a set of features (a context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always correctly predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm, Regressor Elimination, with regret similar to that achievable in the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior worst-case performance even under the realizability assumption. However, we show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) under which our new algorithm has constant regret, unlike previous approaches.
ER -
APA
Agarwal, A., Dudik, M., Kale, S., Langford, J. & Schapire, R. (2012). Contextual Bandit Learning with Predictable Rewards. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:19-26.
