Contextual Dueling Bandits

Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, Masrour Zoghi
Proceedings of The 28th Conference on Learning Theory, PMLR 40:563-587, 2015.

Abstract

We consider the problem of learning to choose actions using contextual information when provided with limited feedback in the form of relative pairwise comparisons. We study this problem in the dueling-bandits framework of Yue et al. (COLT’09), which we extend to incorporate context. Roughly, the learner’s goal is to find the best policy, or way of behaving, in some space of policies, although “best” is not always so clearly defined. Here, we propose a new and natural solution concept, rooted in game theory, called a von Neumann winner: a randomized policy that beats or ties every other policy. We show that this notion overcomes important limitations of existing solutions, particularly the Condorcet winner, which has typically been used in the past but requires strong and often unrealistic assumptions. We then present three efficient algorithms for online learning in our setting, and for approximating a von Neumann winner from batch-like data. The first of these algorithms achieves particularly low regret, even when data is adversarial, although its time and space requirements are linear in the size of the policy space. The other two algorithms require time and space only logarithmic in the size of the policy space when provided access to an oracle for solving classification problems on the space.
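
To make the solution concept concrete (this is an illustration, not one of the paper's algorithms): over a finite policy set with known preference matrix P, where P[i, j] is the probability that policy i beats policy j, a von Neumann winner is a maximin strategy of the zero-sum game with payoff matrix P − 1/2, and can be computed by linear programming. Below is a minimal Python sketch; the 3-policy cyclic preference matrix is a made-up example in which no Condorcet winner exists but a von Neumann winner does.

```python
# Sketch: computing a von Neumann winner of a finite policy set via an LP.
# Assumes a fully known preference matrix P with P[i, j] = probability that
# policy i beats policy j (so P + P.T == 1 and P[i, i] == 0.5). This only
# illustrates the solution concept; it is not the paper's online or
# oracle-based algorithm.
import numpy as np
from scipy.optimize import linprog


def von_neumann_winner(P):
    """Return weights w with sum(w) = 1, w >= 0, and (w @ P)[j] >= 1/2 for
    every j, i.e. a randomized policy that beats or ties every pure policy."""
    K = P.shape[0]
    M = P - 0.5  # payoff matrix of the induced zero-sum game (skew-symmetric)
    # Variables: (w_1, ..., w_K, v). Maximize v subject to
    #   (w @ M)[j] >= v for all j,   sum(w) = 1,   w >= 0.
    c = np.concatenate([np.zeros(K), [-1.0]])   # linprog minimizes, so use -v
    A_ub = np.hstack([-M.T, np.ones((K, 1))])   # v - (w @ M)[j] <= 0
    b_ub = np.zeros(K)
    A_eq = np.concatenate([np.ones(K), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * K + [(None, None)]   # w >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:K]


# Cyclic preferences: 0 beats 1, 1 beats 2, 2 beats 0 -- no Condorcet winner,
# yet the von Neumann winner exists (uniform, by symmetry).
P = np.array([[0.5, 0.8, 0.2],
              [0.2, 0.5, 0.8],
              [0.8, 0.2, 0.5]])
w = von_neumann_winner(P)
print(np.round(w, 3), np.round(w @ P, 3))  # ~[0.333 0.333 0.333], all >= 0.5
```

In the bandit setting the preference matrix must instead be estimated from dueling feedback, and when the policy space is huge an explicit LP of this form is infeasible; those are the difficulties the paper's online and oracle-based algorithms address.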

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Dudik15,
  title     = {Contextual Dueling Bandits},
  author    = {Dudík, Miroslav and Hofmann, Katja and Schapire, Robert E. and Slivkins, Aleksandrs and Zoghi, Masrour},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {563--587},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Dudik15.pdf},
  url       = {https://proceedings.mlr.press/v40/Dudik15.html}
}
APA
Dudík, M., Hofmann, K., Schapire, R. E., Slivkins, A., & Zoghi, M. (2015). Contextual Dueling Bandits. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:563-587. Available from https://proceedings.mlr.press/v40/Dudik15.html.