Contextual Multi-Armed Bandits

Tyler Lu, David Pal, Martin Pal
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:485-492, 2010.

Abstract

We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions so as to maximize the total payoff of the chosen actions. The payoff depends on both the action chosen and the context. In contrast, context-free multi-armed bandit problems, a focus of much previous research, model situations where no side information is available and the payoff depends only on the action chosen. Our problem is motivated by sponsored web search, where the task is to display ads to a user of an Internet search engine based on her search query so as to maximize the click-through rate (CTR) of the ads displayed. We cast this problem as a contextual multi-armed bandit problem where queries and ads form metric spaces and the payoff function is Lipschitz with respect to both metrics. For any ε > 0 we present an algorithm with regret O(T^{(a+b+1)/(a+b+2) + ε}) where a, b are the covering dimensions of the query space and the ad space respectively. We prove a lower bound Ω(T^{(ã+b̃+1)/(ã+b̃+2) - ε}) for the regret of any algorithm, where ã, b̃ are the packing dimensions of the query space and the ad space respectively. For finite spaces or convex bounded subsets of Euclidean spaces, this gives almost matching upper and lower bounds.
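The protocol the abstract describes can be made concrete with a small simulation. The sketch below is illustrative only, not the paper's algorithm: it discretizes a [0, 1] context (query) space and a [0, 1] action (ad) space into bins and runs UCB1 independently in each context bin, against a hypothetical 1-Lipschitz payoff function invented for the demo.

```python
import math
import random

def contextual_ucb(T, n_ctx_bins=4, n_arm_bins=4, seed=0):
    """Toy contextual bandit on [0,1] x [0,1]: discretize both spaces
    and run UCB1 per context bin. Returns cumulative (pseudo-)regret."""
    rng = random.Random(seed)
    # Hypothetical 1-Lipschitz expected payoff of ad y for query x.
    payoff = lambda x, y: 1.0 - abs(x - y)
    counts = [[0] * n_arm_bins for _ in range(n_ctx_bins)]
    sums = [[0.0] * n_arm_bins for _ in range(n_ctx_bins)]
    regret = 0.0
    for t in range(1, T + 1):
        x = rng.random()                              # context revealed this round
        c = min(int(x * n_ctx_bins), n_ctx_bins - 1)  # its context bin

        def ucb_index(a):
            if counts[c][a] == 0:
                return float("inf")                   # force one pull of each arm bin
            mean = sums[c][a] / counts[c][a]
            return mean + math.sqrt(2 * math.log(t) / counts[c][a])

        a = max(range(n_arm_bins), key=ucb_index)
        y = (a + 0.5) / n_arm_bins                    # play the arm bin's center
        r = payoff(x, y) + rng.uniform(-0.1, 0.1)     # noisy observed payoff
        counts[c][a] += 1
        sums[c][a] += r
        # Best action for context x is y = x, with expected payoff 1.
        regret += payoff(x, x) - payoff(x, y)
    return regret

regret = contextual_ucb(T=2000)
```

With a fixed discretization the regret has two parts: an approximation term from playing bin centers instead of the best ad, and an estimation term from learning within each cell. The paper's bounds come from balancing these against the covering dimensions of the two spaces.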

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-lu10a,
  title     = {Contextual Multi-Armed Bandits},
  author    = {Lu, Tyler and Pal, David and Pal, Martin},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {485--492},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/lu10a/lu10a.pdf},
  url       = {https://proceedings.mlr.press/v9/lu10a.html},
  abstract  = {We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions so as to maximize the total payoff of the chosen actions. The payoff depends on both the action chosen and the context. In contrast, context-free multi-armed bandit problems, a focus of much previous research, model situations where no side information is available and the payoff depends only on the action chosen. Our problem is motivated by sponsored web search, where the task is to display ads to a user of an Internet search engine based on her search query so as to maximize the click-through rate (CTR) of the ads displayed. We cast this problem as a contextual multi-armed bandit problem where queries and ads form metric spaces and the payoff function is Lipschitz with respect to both metrics. For any $\epsilon > 0$ we present an algorithm with regret $O(T^{\frac{a+b+1}{a+b+2}+\epsilon})$ where $a, b$ are the covering dimensions of the query space and the ad space respectively. We prove a lower bound $\Omega(T^{\frac{\tilde{a}+\tilde{b}+1}{\tilde{a}+\tilde{b}+2}-\epsilon})$ for the regret of any algorithm, where $\tilde{a}, \tilde{b}$ are the packing dimensions of the query space and the ad space respectively. For finite spaces or convex bounded subsets of Euclidean spaces, this gives almost matching upper and lower bounds.}
}
Endnote
%0 Conference Paper
%T Contextual Multi-Armed Bandits
%A Tyler Lu
%A David Pal
%A Martin Pal
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-lu10a
%I PMLR
%P 485--492
%U https://proceedings.mlr.press/v9/lu10a.html
%V 9
%X We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions so as to maximize the total payoff of the chosen actions. The payoff depends on both the action chosen and the context. In contrast, context-free multi-armed bandit problems, a focus of much previous research, model situations where no side information is available and the payoff depends only on the action chosen. Our problem is motivated by sponsored web search, where the task is to display ads to a user of an Internet search engine based on her search query so as to maximize the click-through rate (CTR) of the ads displayed. We cast this problem as a contextual multi-armed bandit problem where queries and ads form metric spaces and the payoff function is Lipschitz with respect to both metrics. For any ε > 0 we present an algorithm with regret O(T^{(a+b+1)/(a+b+2) + ε}) where a, b are the covering dimensions of the query space and the ad space respectively. We prove a lower bound Ω(T^{(ã+b̃+1)/(ã+b̃+2) - ε}) for the regret of any algorithm, where ã, b̃ are the packing dimensions of the query space and the ad space respectively. For finite spaces or convex bounded subsets of Euclidean spaces, this gives almost matching upper and lower bounds.
RIS
TY - CPAPER
TI - Contextual Multi-Armed Bandits
AU - Tyler Lu
AU - David Pal
AU - Martin Pal
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-lu10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 485
EP - 492
L1 - http://proceedings.mlr.press/v9/lu10a/lu10a.pdf
UR - https://proceedings.mlr.press/v9/lu10a.html
AB - We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions so as to maximize the total payoff of the chosen actions. The payoff depends on both the action chosen and the context. In contrast, context-free multi-armed bandit problems, a focus of much previous research, model situations where no side information is available and the payoff depends only on the action chosen. Our problem is motivated by sponsored web search, where the task is to display ads to a user of an Internet search engine based on her search query so as to maximize the click-through rate (CTR) of the ads displayed. We cast this problem as a contextual multi-armed bandit problem where queries and ads form metric spaces and the payoff function is Lipschitz with respect to both metrics. For any ε > 0 we present an algorithm with regret O(T^{(a+b+1)/(a+b+2) + ε}) where a, b are the covering dimensions of the query space and the ad space respectively. We prove a lower bound Ω(T^{(ã+b̃+1)/(ã+b̃+2) - ε}) for the regret of any algorithm, where ã, b̃ are the packing dimensions of the query space and the ad space respectively. For finite spaces or convex bounded subsets of Euclidean spaces, this gives almost matching upper and lower bounds.
ER -
APA
Lu, T., Pal, D. & Pal, M. (2010). Contextual Multi-Armed Bandits. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:485-492. Available from https://proceedings.mlr.press/v9/lu10a.html.