Thresholding Bandit Problem with Both Duels and Pulls


Yichong Xu, Xi Chen, Aarti Singh, Artur Dubrawski;
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2591-2600, 2020.

Abstract

The Thresholding Bandit Problem (TBP) aims to find the set of arms whose mean rewards exceed a given threshold. We consider a new setting of TBP where, in addition to pulling arms, one can also duel two arms and observe which of the two has the greater mean. In our motivating application from crowdsourcing, dueling two arms can be more cost-effective and time-efficient than direct pulls. We refer to this problem as TBP with Dueling Choices (TBP-DC). This paper provides an algorithm called Rank-Search (RS) for solving TBP-DC by alternating between ranking and binary search. We prove theoretical guarantees for RS, and also give lower bounds to show its optimality. Experiments show that RS outperforms previous baseline algorithms that only use pulls or duels.
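The two-phase idea sketched in the abstract can be illustrated with a toy simulation. The following is not the authors' Rank-Search algorithm, only a minimal sketch of the same structure under assumed noise models: duels are modeled as repeated pairwise Bernoulli comparisons with majority voting, pulls as Bernoulli rewards, and all arm means, sample sizes, and function names are illustrative assumptions.

```python
import random
from functools import cmp_to_key

random.seed(0)

# Hypothetical setup: hidden Bernoulli means, one per arm, and a threshold.
MEANS = [0.1, 0.3, 0.45, 0.6, 0.8]
THRESHOLD = 0.5

def pull(arm):
    """Noisy pull: Bernoulli reward drawn with the arm's hidden mean."""
    return 1 if random.random() < MEANS[arm] else 0

def duel(a, b):
    """Noisy duel: report which arm looks stronger in repeated pairwise
    comparisons (majority vote over many rounds to suppress noise)."""
    wins_a = losses_a = 0
    for _ in range(301):
        ra, rb = pull(a), pull(b)
        wins_a += ra > rb
        losses_a += ra < rb
    return a if wins_a >= losses_a else b

def rank_then_search(n_arms, threshold):
    """Phase 1 (duels): rank arms in ascending order of estimated mean.
    Phase 2 (pulls): binary-search the ranked list for the first arm
    whose estimated mean exceeds the threshold; all later arms qualify."""
    order = sorted(range(n_arms),
                   key=cmp_to_key(lambda a, b: 1 if duel(a, b) == a else -1))
    lo, hi = 0, n_arms
    while lo < hi:
        mid = (lo + hi) // 2
        est = sum(pull(order[mid]) for _ in range(3000)) / 3000
        if est > threshold:
            hi = mid
        else:
            lo = mid + 1
    return set(order[lo:])

print(rank_then_search(len(MEANS), THRESHOLD))
```

The sketch uses cheap pairwise duels for global ordering and reserves the more informative (but, in the crowdsourcing motivation, more costly) pulls for a logarithmic number of threshold comparisons along the ranked list.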
