Recommendation Systems with Distribution-Free Reliability Guarantees

Anastasios N Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, Michael I Jordan
Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 204:175-193, 2023.

Abstract

When building recommendation systems, we seek to output a helpful set of items to the user. Under the hood, a ranking model predicts which of two candidate items is better, and we must distill these pairwise comparisons into the user-facing output. However, a learned ranking model is never perfect, so taking its predictions at face value gives no guarantee that the user-facing output is reliable. Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items. Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate (FDR), regardless of the (unknown) data distribution. Moreover, our calibration algorithm enables the easy and principled integration of multiple objectives in recommender systems. As an example, we show how to optimize for recommendation diversity subject to a user-specified level of FDR control, circumventing the need to specify ad hoc weights of a diversity loss against an accuracy loss. Throughout, we focus on the problem of learning to rank a set of possible recommendations, evaluating our methods on the Yahoo! Learning to Rank and MSMarco datasets.
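
To make the flavor of the guarantee concrete, here is a minimal Python sketch of threshold calibration for FDR control. It is an illustration under our own assumptions, not the authors' exact algorithm: the function name, the Hoeffding-style confidence bound, and the fixed-sequence scan over thresholds are all assumptions. The idea is to use a held-out calibration set to pick the most permissive score threshold whose upper confidence bound on the average false discovery proportion still sits below the user-specified level alpha.

import numpy as np

def calibrate_fdr_threshold(cal_scores, cal_labels, alpha=0.1, delta=0.1):
    """Calibrate a score threshold for FDR control (hypothetical sketch).

    cal_scores: list of 1-D arrays, one per calibration user (model scores).
    cal_labels: list of 1-D {0, 1} arrays; 1 means the item is truly good.
    alpha:      target FDR level.
    delta:      error budget for the Hoeffding upper confidence bound.

    Scans thresholds from strictest to most permissive and returns the
    last one whose FDR upper bound was still <= alpha (np.inf if none,
    i.e., recommend nothing).
    """
    n = len(cal_scores)

    def fdp(scores, labels, lam):
        picked = scores >= lam
        if picked.sum() == 0:
            return 0.0  # an empty set makes no false discoveries
        return (picked & (labels == 0)).sum() / picked.sum()

    # With probability >= 1 - delta, true FDR <= mean FDP + slack (Hoeffding).
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))

    best = np.inf
    for lam in np.sort(np.unique(np.concatenate(cal_scores)))[::-1]:
        mean_fdp = np.mean([fdp(s, y, lam) for s, y in zip(cal_scores, cal_labels)])
        if mean_fdp + slack <= alpha:
            best = lam  # this threshold is certified; try a more permissive one
        else:
            break       # fixed-sequence stop: no later threshold is tested
    return best

Stopping at the first failure mimics fixed-sequence multiple testing, which keeps such a scan valid without an explicit multiplicity correction; the paper's calibration additionally lets one optimize secondary objectives, such as diversity, among the thresholds that pass.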

Cite this Paper


BibTeX
@InProceedings{pmlr-v204-angelopoulos23a,
  title     = {Recommendation Systems with Distribution-Free Reliability Guarantees},
  author    = {Angelopoulos, Anastasios N and Krauth, Karl and Bates, Stephen and Wang, Yixin and Jordan, Michael I},
  booktitle = {Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications},
  pages     = {175--193},
  year      = {2023},
  editor    = {Papadopoulos, Harris and Nguyen, Khuong An and Boström, Henrik and Carlsson, Lars},
  volume    = {204},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v204/angelopoulos23a/angelopoulos23a.pdf},
  url       = {https://proceedings.mlr.press/v204/angelopoulos23a.html},
  abstract  = {When building recommendation systems, we seek to output a helpful set of items to the user. Under the hood, a ranking model predicts which of two candidate items is better, and we must distill these pairwise comparisons into the user-facing output. However, a learned ranking model is never perfect, so taking its predictions at face value gives no guarantee that the user-facing output is reliable. Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items. Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate (FDR), regardless of the (unknown) data distribution. Moreover, our calibration algorithm enables the easy and principled integration of multiple objectives in recommender systems. As an example, we show how to optimize for recommendation diversity subject to a user-specified level of FDR control, circumventing the need to specify ad hoc weights of a diversity loss against an accuracy loss. Throughout, we focus on the problem of learning to rank a set of possible recommendations, evaluating our methods on the Yahoo! Learning to Rank and MSMarco datasets.}
}
Endnote
%0 Conference Paper
%T Recommendation Systems with Distribution-Free Reliability Guarantees
%A Anastasios N Angelopoulos
%A Karl Krauth
%A Stephen Bates
%A Yixin Wang
%A Michael I Jordan
%B Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications
%C Proceedings of Machine Learning Research
%D 2023
%E Harris Papadopoulos
%E Khuong An Nguyen
%E Henrik Boström
%E Lars Carlsson
%F pmlr-v204-angelopoulos23a
%I PMLR
%P 175--193
%U https://proceedings.mlr.press/v204/angelopoulos23a.html
%V 204
%X When building recommendation systems, we seek to output a helpful set of items to the user. Under the hood, a ranking model predicts which of two candidate items is better, and we must distill these pairwise comparisons into the user-facing output. However, a learned ranking model is never perfect, so taking its predictions at face value gives no guarantee that the user-facing output is reliable. Building from a pre-trained ranking model, we show how to return a set of items that is rigorously guaranteed to contain mostly good items. Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate (FDR), regardless of the (unknown) data distribution. Moreover, our calibration algorithm enables the easy and principled integration of multiple objectives in recommender systems. As an example, we show how to optimize for recommendation diversity subject to a user-specified level of FDR control, circumventing the need to specify ad hoc weights of a diversity loss against an accuracy loss. Throughout, we focus on the problem of learning to rank a set of possible recommendations, evaluating our methods on the Yahoo! Learning to Rank and MSMarco datasets.
APA
Angelopoulos, A.N., Krauth, K., Bates, S., Wang, Y. & Jordan, M.I. (2023). Recommendation Systems with Distribution-Free Reliability Guarantees. Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, in Proceedings of Machine Learning Research 204:175-193. Available from https://proceedings.mlr.press/v204/angelopoulos23a.html.