Dueling Bandits with Weak Regret
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:731-739, 2017.
Abstract
We consider online content recommendation with implicit feedback through pairwise comparisons, formalized as the so-called dueling bandit problem. We study the dueling bandit problem in the Condorcet winner setting, and consider two notions of regret: the more well-studied strong regret, which is 0 only when both arms pulled are the Condorcet winner; and the less well-studied weak regret, which is 0 if either arm pulled is the Condorcet winner. We propose a new algorithm for this problem, Winner Stays (WS), with variations for each kind of regret: WS for weak regret (WS-W) has expected cumulative weak regret that is $O(N^2)$, and $O(N\log(N))$ if arms have a total order; WS for strong regret (WS-S) has expected cumulative strong regret of $O(N^2 + N \log(T))$, and $O(N\log(N)+N\log(T))$ if arms have a total order. WS-W is the first dueling bandit algorithm with weak regret that is constant in time. WS is simple to compute, even for problems with many arms, and we demonstrate through numerical experiments on simulated and real data that WS has significantly smaller regret than existing algorithms in both the weak- and strong-regret settings.
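The distinction between weak and strong regret can be made concrete with a small simulation. The sketch below is illustrative only: it uses a crude winner-stays heuristic (not the WS algorithm from the paper), a hypothetical preference matrix `P`, and a simple 0/1 indicator of regret per round rather than the paper's preference-gap regret. Weak regret is charged only when neither dueled arm is the Condorcet winner; strong regret is charged unless both are.

```python
import random

def duel(P, i, j, rng):
    """Return the winner of a duel between arms i and j.
    P[i][j] is the probability that arm i beats arm j."""
    return i if rng.random() < P[i][j] else j

def simulate(P, T, seed=0):
    """Simulate a simple winner-stays heuristic (an illustrative sketch,
    not the WS algorithm itself) and track cumulative weak and strong
    regret, measured as 0/1 indicators per round."""
    rng = random.Random(seed)
    n = len(P)
    # Condorcet winner: beats every other arm with probability > 1/2.
    cw = next(i for i in range(n)
              if all(P[i][j] > 0.5 for j in range(n) if j != i))
    incumbent, streak = 0, 0
    weak = strong = 0
    for t in range(T):
        if streak >= 10:
            # Crude exploitation: duel the long-standing incumbent
            # against itself, which can zero out strong regret.
            challenger = incumbent
        else:
            challenger = rng.choice([j for j in range(n) if j != incumbent])
        # Weak regret is 0 if EITHER arm is the Condorcet winner;
        # strong regret is 0 only if BOTH arms are.
        weak += 0 if cw in (incumbent, challenger) else 1
        strong += 0 if incumbent == cw and challenger == cw else 1
        winner = (incumbent if challenger == incumbent
                  else duel(P, incumbent, challenger, rng))
        streak = streak + 1 if winner == incumbent else 0
        incumbent = winner
    return weak, strong

# Hypothetical 3-arm instance: arm 0 is a clear Condorcet winner.
P = [[0.5, 0.9, 0.9],
     [0.1, 0.5, 0.6],
     [0.1, 0.4, 0.5]]
weak, strong = simulate(P, T=1000)
```

Because every round with zero strong regret also has zero weak regret, cumulative weak regret never exceeds cumulative strong regret, which is why constant-in-time weak regret (as WS-W achieves) is a strictly easier target than logarithmic strong regret.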