DCM Bandits: Learning to Rank with Multiple Clicks

Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Zheng Wen
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1215-1224, 2016.

Abstract

A search engine recommends a list of web pages to the user. The user examines this list from the first page to the last and clicks on all attractive pages until satisfied. This behavior can be described by the dependent click model (DCM). We propose DCM bandits, an online learning variant of the DCM in which the goal is to maximize the probability of recommending satisfactory items, such as web pages. The main challenge of our learning problem is that we do not observe which attractive item is satisfactory. We propose a computationally efficient learning algorithm for this problem, dcmKL-UCB; derive gap-dependent upper bounds on its regret under reasonable assumptions; and prove a matching lower bound up to logarithmic factors. We evaluate dcmKL-UCB on synthetic and real-world problems and show that it performs well even when our model is misspecified. This work presents the first practical and regret-optimal online algorithm for learning to rank with multiple clicks in a cascade-like click model.
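
To make the click model concrete, the following is a minimal simulation sketch in Python (our own illustration, not code from the paper; the function name, item set, and probability values are assumed for the example). It shows the censored feedback the abstract refers to: the learner observes which positions were clicked, but never whether a click was satisfactory.

import random

def simulate_dcm(ranked_items, attraction, satisfaction):
    """Simulate one user under the dependent click model (DCM).

    attraction[i]   -- probability that item i attracts a click when examined
    satisfaction[k] -- probability that a click at position k satisfies the user
    Returns the clicked positions, the only feedback the learner observes.
    """
    clicks = []
    for k, item in enumerate(ranked_items):
        if random.random() < attraction[item]:      # item attracts: user clicks
            clicks.append(k)
            if random.random() < satisfaction[k]:   # user is satisfied ...
                break                               # ... and leaves, unobserved
    return clicks

# Illustrative (assumed) parameters: 6 items, 3 recommended positions.
attraction = {0: 0.7, 1: 0.5, 2: 0.4, 3: 0.3, 4: 0.2, 5: 0.1}
satisfaction = [0.9, 0.6, 0.3]   # per-position termination probabilities
print(simulate_dcm([0, 1, 2], attraction, satisfaction))

Note that a user who is satisfied by the last click and a user who simply examines the list to its end produce the same click log, which is exactly the ambiguity the paper's algorithm must handle.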

Cite this Paper

BibTeX
@InProceedings{pmlr-v48-katariya16,
  title     = {DCM Bandits: Learning to Rank with Multiple Clicks},
  author    = {Katariya, Sumeet and Kveton, Branislav and Szepesvari, Csaba and Wen, Zheng},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1215--1224},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/katariya16.pdf},
  url       = {https://proceedings.mlr.press/v48/katariya16.html}
}
Endnote
%0 Conference Paper
%T DCM Bandits: Learning to Rank with Multiple Clicks
%A Sumeet Katariya
%A Branislav Kveton
%A Csaba Szepesvari
%A Zheng Wen
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-katariya16
%I PMLR
%P 1215--1224
%U https://proceedings.mlr.press/v48/katariya16.html
%V 48
RIS
TY - CPAPER
TI - DCM Bandits: Learning to Rank with Multiple Clicks
AU - Sumeet Katariya
AU - Branislav Kveton
AU - Csaba Szepesvari
AU - Zheng Wen
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-katariya16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 1215
EP - 1224
L1 - http://proceedings.mlr.press/v48/katariya16.pdf
UR - https://proceedings.mlr.press/v48/katariya16.html
ER -
APA
Katariya, S., Kveton, B., Szepesvari, C. & Wen, Z. (2016). DCM Bandits: Learning to Rank with Multiple Clicks. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1215-1224. Available from https://proceedings.mlr.press/v48/katariya16.html.
