Online Ranking with Top-1 Feedback

Sougata Chaudhuri, Ambuj Tewari
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:129-137, 2015.

Abstract

We consider a setting where a system learns to rank a fixed set of m items. The goal is to produce good item rankings for users with diverse interests who interact online with the system for T rounds. We consider a novel top-1 feedback model: at the end of each round, the relevance score of only the top-ranked object is revealed. However, the performance of the system is judged on the entire ranked list. We provide a comprehensive set of results regarding learnability under this challenging setting. For PairwiseLoss and DCG, two popular ranking measures, we prove that the minimax regret is Θ(T^{2/3}). Moreover, the minimax regret is achievable using an efficient strategy that spends only O(m log m) time per round. The same efficient strategy achieves O(T^{2/3}) regret for Precision@k. Surprisingly, we show that for normalized versions of these ranking measures, i.e., AUC, NDCG & MAP, no online ranking algorithm can have sublinear regret.
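
For concreteness, the regret notion referred to above can be sketched as follows (illustrative notation, not taken verbatim from the paper): at each round t the learner outputs a ranking of the m items, the environment fixes a relevance vector R_t, and only the relevance of the item the learner ranked at the top is revealed, while the loss ℓ (e.g., PairwiseLoss or negative DCG) is evaluated on the entire ranked list. Regret after T rounds is measured against the best fixed ranking in hindsight:

\[
\mathrm{Regret}_T \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \ell(\sigma_t, R_t)\right] \;-\; \min_{\sigma}\,\sum_{t=1}^{T} \ell(\sigma, R_t),
\]

and the Θ(T^{2/3}) bounds stated above refer to this quantity for PairwiseLoss and DCG under top-1 feedback.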

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-chaudhuri15,
  title     = {{Online Ranking with Top-1 Feedback}},
  author    = {Chaudhuri, Sougata and Tewari, Ambuj},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {129--137},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/chaudhuri15.pdf},
  url       = {https://proceedings.mlr.press/v38/chaudhuri15.html},
  abstract  = {We consider a setting where a system learns to rank a fixed set of m items. The goal is to produce good item rankings for users with diverse interests who interact online with the system for T rounds. We consider a novel top-1 feedback model: at the end of each round, the relevance score of only the top-ranked object is revealed. However, the performance of the system is judged on the entire ranked list. We provide a comprehensive set of results regarding learnability under this challenging setting. For PairwiseLoss and DCG, two popular ranking measures, we prove that the minimax regret is Θ(T^{2/3}). Moreover, the minimax regret is achievable using an efficient strategy that spends only O(m log m) time per round. The same efficient strategy achieves O(T^{2/3}) regret for Precision@k. Surprisingly, we show that for normalized versions of these ranking measures, i.e., AUC, NDCG & MAP, no online ranking algorithm can have sublinear regret.}
}
APA
Chaudhuri, S., & Tewari, A. (2015). Online Ranking with Top-1 Feedback. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:129-137. Available from https://proceedings.mlr.press/v38/chaudhuri15.html.
