On the Value of Prior in Online Learning to Rank
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6880-6892, 2022.
Abstract
This paper addresses the cold-start problem in online learning to rank (OLTR). We show, both theoretically and empirically, that priors improve the quality of the ranked lists presented to users and refined interactively based on user feedback. These priors can come in the form of unbiased estimates of the relevance of the ranked items or, more practically, can be obtained from offline-learned models. Our experiments show the effectiveness of priors in reducing the short-term regret of tabular OLTR algorithms based on Thompson sampling and BayesUCB.
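As a rough illustration of the idea (not the paper's exact algorithm), the sketch below shows one common way to inject an offline prior into a tabular Thompson sampling ranker: offline relevance estimates seed the per-item Beta posteriors as pseudo-counts, so early rounds already rank reasonably well while click feedback gradually corrects the prior. The function name, the pseudo-count weighting scheme, and the simulated click model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def prior_informed_ts_ranker(prior_means, prior_strength, true_click_probs,
                             n_rounds=1000, k=5, seed=0):
    """Tabular Thompson sampling for top-k ranking, with Beta posteriors
    seeded from offline relevance estimates (the 'prior').

    prior_means      : offline estimates of each item's click probability
    prior_strength   : pseudo-count controlling how much the prior is trusted
    true_click_probs : ground-truth click probabilities (simulation only)
    """
    rng = np.random.default_rng(seed)

    # Beta(alpha, beta) posterior per item; the prior encodes the offline model.
    alpha = np.asarray(prior_means, dtype=float) * prior_strength + 1.0
    beta = (1.0 - np.asarray(prior_means, dtype=float)) * prior_strength + 1.0

    regret = np.zeros(n_rounds)
    best_value = np.sort(true_click_probs)[-k:].sum()  # value of the ideal top-k list

    for t in range(n_rounds):
        # Sample a relevance score per item and rank by the sampled scores.
        sampled = rng.beta(alpha, beta)
        ranked = np.argsort(-sampled)[:k]

        # Simulate independent click feedback on the shown items.
        clicks = rng.random(k) < true_click_probs[ranked]

        # Bayesian update of the shown items' posteriors.
        alpha[ranked] += clicks
        beta[ranked] += 1 - clicks

        regret[t] = best_value - true_click_probs[ranked].sum()

    return np.cumsum(regret)

# Example: an informative offline prior vs. an uninformative one.
true_p = np.array([0.65, 0.5, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05])
offline = np.clip(true_p + 0.05 * np.random.default_rng(1).normal(size=8), 0.01, 0.99)
with_prior = prior_informed_ts_ranker(offline, prior_strength=50.0, true_click_probs=true_p)
no_prior = prior_informed_ts_ranker(np.full(8, 0.5), prior_strength=0.0, true_click_probs=true_p)
print(with_prior[-1], no_prior[-1])  # the informed prior typically yields lower cumulative regret
```

A BayesUCB variant would replace the posterior sample with an upper quantile of each item's Beta posterior; the prior-seeding step stays the same.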