Infinite Action Contextual Bandits with Reusable Data Exhaust

Mark Rucker, Yinglun Zhu, Paul Mineiro
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:29259-29274, 2023.

Abstract

For infinite action contextual bandits, smoothed regret and reduction to regression results in state-of-the-art online performance with computational cost independent of the action set: unfortunately, the resulting data exhaust does not have well-defined importance weights. This frustrates the execution of downstream data science processes such as offline model selection. In this paper we describe an online algorithm with an equivalent smoothed regret guarantee, but which generates well-defined importance weights: in exchange, the online computational cost increases, but only to order smoothness (i.e., still independent of the action set). This removes a key obstacle to adoption of smoothed regret in production scenarios.
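To illustrate why well-defined importance weights matter for the offline model selection the abstract mentions, here is a minimal, hypothetical sketch (not the paper's algorithm): if each logged record carries the propensity the logging policy assigned to the action it took, inverse-propensity scoring (IPS) gives an unbiased off-policy value estimate for any candidate policy. The function and variable names below are illustrative assumptions.

```python
# Hedged sketch: off-policy value estimation from logged bandit data.
# Each record is (context, action, propensity, reward), where propensity is
# the density the logging policy assigned to the chosen action. Without a
# well-defined propensity, this estimator cannot be computed -- which is the
# obstacle the paper addresses.

def ips_estimate(logged, target_density):
    """Inverse-propensity scoring estimate of a candidate policy's value.

    logged: iterable of (context, action, propensity, reward) tuples.
    target_density(context, action): density the candidate policy assigns.
    """
    total = 0.0
    n = 0
    for x, a, p, r in logged:
        # Importance weight = target density / logging density.
        total += r * target_density(x, a) / p
        n += 1
    return total / n

# Toy usage with made-up data: logging policy uniform on [0, 1] (density 1),
# reward equal to the action taken, candidate policy also uniform.
logged = [(None, a / 10, 1.0, a / 10) for a in range(10)]
uniform = lambda x, a: 1.0
print(ips_estimate(logged, uniform))  # prints 0.45, the mean logged reward
```

Because logger and candidate coincide here, the importance weights are all 1 and the estimate reduces to the empirical mean reward; with a different candidate density the same logged data is reweighted rather than recollected.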

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-rucker23a,
  title     = {Infinite Action Contextual Bandits with Reusable Data Exhaust},
  author    = {Rucker, Mark and Zhu, Yinglun and Mineiro, Paul},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {29259--29274},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/rucker23a/rucker23a.pdf},
  url       = {https://proceedings.mlr.press/v202/rucker23a.html},
  abstract  = {For infinite action contextual bandits, smoothed regret and reduction to regression results in state-of-the-art online performance with computational cost independent of the action set: unfortunately, the resulting data exhaust does not have well-defined importance-weights. This frustrates the execution of downstream data science processes such as offline model selection. In this paper we describe an online algorithm with an equivalent smoothed regret guarantee, but which generates well-defined importance weights: in exchange, the online computational cost increases, but only to order smoothness (i.e., still independent of the action set). This removes a key obstacle to adoption of smoothed regret in production scenarios.}
}
Endnote
%0 Conference Paper
%T Infinite Action Contextual Bandits with Reusable Data Exhaust
%A Mark Rucker
%A Yinglun Zhu
%A Paul Mineiro
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-rucker23a
%I PMLR
%P 29259--29274
%U https://proceedings.mlr.press/v202/rucker23a.html
%V 202
%X For infinite action contextual bandits, smoothed regret and reduction to regression results in state-of-the-art online performance with computational cost independent of the action set: unfortunately, the resulting data exhaust does not have well-defined importance-weights. This frustrates the execution of downstream data science processes such as offline model selection. In this paper we describe an online algorithm with an equivalent smoothed regret guarantee, but which generates well-defined importance weights: in exchange, the online computational cost increases, but only to order smoothness (i.e., still independent of the action set). This removes a key obstacle to adoption of smoothed regret in production scenarios.
APA
Rucker, M., Zhu, Y. & Mineiro, P. (2023). Infinite Action Contextual Bandits with Reusable Data Exhaust. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:29259-29274. Available from https://proceedings.mlr.press/v202/rucker23a.html.