Instance-dependent Regret Bounds for Dueling Bandits

Akshay Balsubramani, Zohar Karnin, Robert E. Schapire, Masrour Zoghi
29th Annual Conference on Learning Theory, PMLR 49:336-360, 2016.

Abstract

We study the multi-armed dueling bandit problem in which feedback is provided in the form of relative comparisons between pairs of actions, with the goal of eventually learning to select actions that are close to the best. Following Dudik et al. (2015), we aim for algorithms whose performance approaches that of the optimal randomized choice of actions, the von Neumann winner, expressly avoiding more restrictive assumptions, for instance, regarding the existence of a single best action (a Condorcet winner). In this general setting, the best known algorithms achieve regret O(√(KT)) in T rounds with K actions. In this paper, we present the first instance-dependent regret bounds for the general problem, focusing particularly on when the von Neumann winner is sparse. Specifically, we propose a new algorithm whose regret, relative to a unique von Neumann winner with sparsity s, is at most O(√(sT)), plus an instance-dependent constant. Thus, when the sparsity is much smaller than the total number of actions, our result indicates that learning can be substantially faster.
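The von Neumann winner referred to above is the maximin strategy of the zero-sum game defined by the preference matrix P, where P[i][j] is the probability that action i beats action j. The sketch below, which is illustrative and not the paper's algorithm, approximates it with multiplicative-weights self-play against a best-responding opponent (a standard way to solve such games); the preference matrix, step size, and round count are all assumptions chosen for the example.

```python
import math

# Illustrative preference matrix for K = 4 actions: a rock-paper-scissors
# cycle among actions 0-2, plus action 3, which loses to all of them.
# P[i][j] = probability that action i beats action j; P[i][j] + P[j][i] = 1.
# There is no Condorcet winner here, but the von Neumann winner exists and
# is sparse: it puts weight only on the three cycle actions.
P = [
    [0.5, 0.9, 0.1, 0.8],
    [0.1, 0.5, 0.9, 0.8],
    [0.9, 0.1, 0.5, 0.8],
    [0.2, 0.2, 0.2, 0.5],
]
K = len(P)

def von_neumann_winner(P, rounds=20000, eta=0.1):
    """Approximate the von Neumann winner (the maximin strategy of the
    zero-sum game with payoffs P[i][j] - 1/2) by multiplicative-weights
    self-play against a best-responding opponent."""
    K = len(P)
    w = [1.0 / K] * K   # current mixed strategy over actions
    avg = [0.0] * K     # running average of iterates (this is what converges)
    for _ in range(rounds):
        # Opponent best-responds with the column minimizing our win rate.
        j = min(range(K), key=lambda c: sum(w[i] * P[i][c] for i in range(K)))
        # Multiplicative-weights update against that response.
        w = [wi * math.exp(eta * P[i][j]) for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
        for i in range(K):
            avg[i] += w[i] / rounds
    return avg

w = von_neumann_winner(P)
# The (approximate) von Neumann winner beats every single action with
# probability at least 1/2, up to the algorithm's approximation error.
win_rates = [sum(w[i] * P[i][j] for i in range(K)) for j in range(K)]
```

In this instance the returned distribution concentrates on the three cycle actions, so its sparsity is s = 3 while K = 4; the paper's O(√(sT)) bound is about exploiting exactly this kind of gap between s and K.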

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-balsubramani16,
  title = {Instance-dependent Regret Bounds for Dueling Bandits},
  author = {Balsubramani, Akshay and Karnin, Zohar and Schapire, Robert E. and Zoghi, Masrour},
  booktitle = {29th Annual Conference on Learning Theory},
  pages = {336--360},
  year = {2016},
  editor = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume = {49},
  series = {Proceedings of Machine Learning Research},
  address = {Columbia University, New York, New York, USA},
  month = {23--26 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v49/balsubramani16.pdf},
  url = {https://proceedings.mlr.press/v49/balsubramani16.html},
  abstract = {We study the multi-armed dueling bandit problem in which feedback is provided in the form of relative comparisons between pairs of actions, with the goal of eventually learning to select actions that are close to the best. Following Dudik et al. (2015), we aim for algorithms whose performance approaches that of the optimal randomized choice of actions, the von Neumann winner, expressly avoiding more restrictive assumptions, for instance, regarding the existence of a single best action (a Condorcet winner). In this general setting, the best known algorithms achieve regret $O(\sqrt{KT})$ in $T$ rounds with $K$ actions. In this paper, we present the first instance-dependent regret bounds for the general problem, focusing particularly on when the von Neumann winner is sparse. Specifically, we propose a new algorithm whose regret, relative to a unique von Neumann winner with sparsity $s$, is at most $O(\sqrt{sT})$, plus an instance-dependent constant. Thus, when the sparsity is much smaller than the total number of actions, our result indicates that learning can be substantially faster.}
}
Endnote
%0 Conference Paper
%T Instance-dependent Regret Bounds for Dueling Bandits
%A Akshay Balsubramani
%A Zohar Karnin
%A Robert E. Schapire
%A Masrour Zoghi
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-balsubramani16
%I PMLR
%P 336--360
%U https://proceedings.mlr.press/v49/balsubramani16.html
%V 49
%X We study the multi-armed dueling bandit problem in which feedback is provided in the form of relative comparisons between pairs of actions, with the goal of eventually learning to select actions that are close to the best. Following Dudik et al. (2015), we aim for algorithms whose performance approaches that of the optimal randomized choice of actions, the von Neumann winner, expressly avoiding more restrictive assumptions, for instance, regarding the existence of a single best action (a Condorcet winner). In this general setting, the best known algorithms achieve regret O(√(KT)) in T rounds with K actions. In this paper, we present the first instance-dependent regret bounds for the general problem, focusing particularly on when the von Neumann winner is sparse. Specifically, we propose a new algorithm whose regret, relative to a unique von Neumann winner with sparsity s, is at most O(√(sT)), plus an instance-dependent constant. Thus, when the sparsity is much smaller than the total number of actions, our result indicates that learning can be substantially faster.
RIS
TY - CPAPER
TI - Instance-dependent Regret Bounds for Dueling Bandits
AU - Akshay Balsubramani
AU - Zohar Karnin
AU - Robert E. Schapire
AU - Masrour Zoghi
BT - 29th Annual Conference on Learning Theory
DA - 2016/06/06
ED - Vitaly Feldman
ED - Alexander Rakhlin
ED - Ohad Shamir
ID - pmlr-v49-balsubramani16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 49
SP - 336
EP - 360
L1 - http://proceedings.mlr.press/v49/balsubramani16.pdf
UR - https://proceedings.mlr.press/v49/balsubramani16.html
AB - We study the multi-armed dueling bandit problem in which feedback is provided in the form of relative comparisons between pairs of actions, with the goal of eventually learning to select actions that are close to the best. Following Dudik et al. (2015), we aim for algorithms whose performance approaches that of the optimal randomized choice of actions, the von Neumann winner, expressly avoiding more restrictive assumptions, for instance, regarding the existence of a single best action (a Condorcet winner). In this general setting, the best known algorithms achieve regret O(√(KT)) in T rounds with K actions. In this paper, we present the first instance-dependent regret bounds for the general problem, focusing particularly on when the von Neumann winner is sparse. Specifically, we propose a new algorithm whose regret, relative to a unique von Neumann winner with sparsity s, is at most O(√(sT)), plus an instance-dependent constant. Thus, when the sparsity is much smaller than the total number of actions, our result indicates that learning can be substantially faster.
ER -
APA
Balsubramani, A., Karnin, Z., Schapire, R.E. & Zoghi, M. (2016). Instance-dependent Regret Bounds for Dueling Bandits. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:336-360. Available from https://proceedings.mlr.press/v49/balsubramani16.html.