Recycling History: Efficient Recommendations from Contextual Dueling Bandits

Suryanarayana Sankagiri, Jalal Etesami, Pouria Fatemi, Matthias Grossglauser
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-20, 2026.

Abstract

The contextual dueling bandit problem models adaptive recommender systems, where at each step the algorithm presents a set of items to the user, and the user’s choice reveals their preference. This setup is well suited for implicit choices users make when navigating a content platform, but does not capture other possible comparison queries. Motivated by the fact that users provide more reliable feedback after consuming items, we propose a new bandit model that can be described as follows. The algorithm recommends one item per time step; after consuming that item, the user is asked to compare it with another item chosen from the user’s consumption history. Importantly, in our model, this comparison item can be chosen without incurring any additional regret, potentially leading to better performance. However, the regret analysis is challenging because of the temporal dependency in the user’s history. To overcome this challenge, we first show that the algorithm can construct informative queries provided the history is rich, i.e., satisfies a certain diversity condition. We then show that a short initial random exploration phase is sufficient for the algorithm to accumulate a rich history with high probability. This result, proven via matrix concentration bounds, yields $O(\sqrt{T})$ regret guarantees. Additionally, our simulations show that reusing past items for comparisons can lead to significantly lower regret than only comparing between simultaneously recommended items.
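Below is a minimal toy simulation of the feedback model the abstract describes, not the paper's actual algorithm. It assumes a linear-utility Bradley–Terry comparison model; the exploration length, the greedy recommendation rule, the history-selection heuristic, and the logistic-regression estimator (theta_hat, T_explore, etc.) are all illustrative placeholders.

```python
# Sketch (illustrative only): one recommended item per round, then a comparison
# against an item chosen from the user's own consumption history at no extra regret.
import numpy as np

rng = np.random.default_rng(0)
d, T, T_explore = 5, 1000, 100            # feature dim, horizon, random-exploration length (assumed)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)  # unknown user preference vector (assumed linear utility)

def items(n=50):
    """Fresh batch of candidate item feature vectors each round (assumption)."""
    x = rng.normal(size=(n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def duel(x, y):
    """User prefers x over y with Bradley-Terry probability (assumed feedback model)."""
    p = 1.0 / (1.0 + np.exp(-(x - y) @ theta_star))
    return rng.random() < p

history, data, regret = [], [], 0.0
theta_hat = np.zeros(d)

for t in range(T):
    cand = items()
    if t < T_explore:
        rec = cand[rng.integers(len(cand))]      # random exploration accumulates a diverse ("rich") history
    else:
        rec = cand[np.argmax(cand @ theta_hat)]  # greedy exploitation (placeholder rule)
    regret += np.max(cand @ theta_star) - rec @ theta_star

    if history:
        # Pick the comparison item from the consumption history: here, the past item
        # farthest from the recommendation, an illustrative choice of "informative query".
        past = max(history, key=lambda h: np.linalg.norm(h - rec))
        data.append((rec - past, 1.0 if duel(rec, past) else 0.0))
        # Crude logistic-regression refit on all comparisons so far (placeholder estimator).
        Z = np.array([z for z, _ in data])
        Y = np.array([y for _, y in data])
        for _ in range(5):
            p = 1.0 / (1.0 + np.exp(-Z @ theta_hat))
            theta_hat -= 0.5 * (Z.T @ (p - Y)) / len(data)
    history.append(rec)

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```

Running the sketch with and without the exploration phase (T_explore = 0) gives a rough sense of why a short random warm-up helps: without it, the early history can lack diversity and the comparison queries stay uninformative.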

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-sankagiri26a, title = {Recycling History: Efficient Recommendations from Contextual Dueling Bandits}, author = {Sankagiri, Suryanarayana and Etesami, Jalal and Fatemi, Pouria and Grossglauser, Matthias}, booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory}, pages = {1--20}, year = {2026}, editor = {Telgarsky, Matus and Ullman, Jonathan}, volume = {313}, series = {Proceedings of Machine Learning Research}, month = {23--26 Feb}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/sankagiri26a/sankagiri26a.pdf}, url = {https://proceedings.mlr.press/v313/sankagiri26a.html}, abstract = {The contextual dueling bandit problem models adaptive recommender systems, where at each step the algorithm presents a set of items to the user, and the user’s choice reveals their preference. This setup is well suited for implicit choices users make when navigating a content platform, but does not capture other possible comparison queries. Motivated by the fact that users provide more reliable feedback after consuming items, we propose a new bandit model that can be described as follows. The algorithm recommends one item per time step; after consuming that item, the user is asked to compare it with another item chosen from the user’s consumption history. Importantly, in our model, this comparison item can be chosen without incurring any additional regret, potentially leading to better performance. However, the regret analysis is challenging because of the temporal dependency in the user’s history. To overcome this challenge, we first show that the algorithm can construct informative queries provided the history is rich, i.e., satisfies a certain diversity condition. We then show that a short initial random exploration phase is sufficient for the algorithm to accumulate a rich history with high probability. This result, proven via matrix concentration bounds, yields $O(\sqrt{T})$ regret guarantees. Additionally, our simulations show that reusing past items for comparisons can lead to significantly lower regret than only comparing between simultaneously recommended items.} }
Endnote
%0 Conference Paper %T Recycling History: Efficient Recommendations from Contextual Dueling Bandits %A Suryanarayana Sankagiri %A Jalal Etesami %A Pouria Fatemi %A Matthias Grossglauser %B Proceedings of The 37th International Conference on Algorithmic Learning Theory %C Proceedings of Machine Learning Research %D 2026 %E Matus Telgarsky %E Jonathan Ullman %F pmlr-v313-sankagiri26a %I PMLR %P 1--20 %U https://proceedings.mlr.press/v313/sankagiri26a.html %V 313 %X The contextual dueling bandit problem models adaptive recommender systems, where at each step the algorithm presents a set of items to the user, and the user’s choice reveals their preference. This setup is well suited for implicit choices users make when navigating a content platform, but does not capture other possible comparison queries. Motivated by the fact that users provide more reliable feedback after consuming items, we propose a new bandit model that can be described as follows. The algorithm recommends one item per time step; after consuming that item, the user is asked to compare it with another item chosen from the user’s consumption history. Importantly, in our model, this comparison item can be chosen without incurring any additional regret, potentially leading to better performance. However, the regret analysis is challenging because of the temporal dependency in the user’s history. To overcome this challenge, we first show that the algorithm can construct informative queries provided the history is rich, i.e., satisfies a certain diversity condition. We then show that a short initial random exploration phase is sufficient for the algorithm to accumulate a rich history with high probability. This result, proven via matrix concentration bounds, yields $O(\sqrt{T})$ regret guarantees. Additionally, our simulations show that reusing past items for comparisons can lead to significantly lower regret than only comparing between simultaneously recommended items.
APA
Sankagiri, S., Etesami, J., Fatemi, P. & Grossglauser, M. (2026). Recycling History: Efficient Recommendations from Contextual Dueling Bandits. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-20. Available from https://proceedings.mlr.press/v313/sankagiri26a.html.
