Active Ranking with Subset-wise Preferences
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:3312-3321, 2019.
Abstract
We consider the problem of probably approximately correct (PAC) ranking of n items by adaptively eliciting subset-wise preference feedback. At each round, the learner chooses a subset of k items and observes stochastic feedback in the form of the winner (most preferred) item of the chosen subset, drawn according to a Plackett-Luce (PL) subset choice model unknown a priori. The objective is to identify an ϵ-optimal ranking of the n items with probability at least 1−δ. When the feedback in each subset round is a single Plackett-Luce-sampled winner item, we give (ϵ,δ)-PAC algorithms with a sample complexity of O((n/ϵ²) ln(n/δ)) rounds, which we establish as order-optimal by exhibiting a matching sample complexity lower bound of Ω((n/ϵ²) ln(n/δ)); this shows that essentially no improvement is possible over the pairwise comparisons setting (k=2). When, however, it is possible to elicit top-m (m ≤ k) ranking feedback according to the PL model from each adaptively chosen subset of size k, we show that an (ϵ,δ)-PAC ranking sample complexity of O((n/(mϵ²)) ln(n/δ)) is achievable with explicit algorithms, which represents an m-wise reduction in sample complexity compared to the pairwise case. This again turns out to be order-wise unimprovable across the class of symmetric ranking algorithms. Our algorithms rely on a novel "pivot trick" to maintain only n itemwise score estimates, unlike the O(n²) pairwise score estimates that have been used in prior work. We report results of numerical experiments that corroborate our findings.
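The subset-wise feedback model described in the abstract can be made concrete with a short simulation. The sketch below is a minimal illustration (not code from the paper): it samples winner or top-m ranking feedback from a chosen subset under the Plackett-Luce model, assuming hypothetical latent item scores theta that the learner never observes directly.

```python
import numpy as np

def pl_top_m_feedback(theta, subset, m, rng=None):
    """Sample top-m ranking feedback from a subset under the Plackett-Luce model.

    theta  : positive latent PL scores for all n items (hypothetical parameters,
             unknown to the learner in the active-ranking setting).
    subset : indices of the k items offered in this round.
    m      : number of top-ranked items revealed (m = 1 gives winner feedback).
    """
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(subset)
    ranking = []
    for _ in range(m):
        # Under PL, the next most-preferred item is drawn with probability
        # proportional to its score among the items not yet ranked.
        scores = np.array([theta[i] for i in remaining], dtype=float)
        probs = scores / scores.sum()
        chosen = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(chosen))
    return ranking

# Example: winner feedback (m = 1) from a chosen subset of size k = 4,
# with purely illustrative score values.
theta = np.array([0.5, 1.0, 2.0, 0.8, 1.5, 0.3])
print(pl_top_m_feedback(theta, subset=[0, 2, 4, 5], m=1))
```

With m = 1 this reproduces winner-only feedback; increasing m toward k yields the richer top-m ranking feedback for which the abstract states the m-wise sample-complexity reduction.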