Contextual Bandits with Large Action Spaces: Made Practical

Yinglun Zhu, Dylan J Foster, John Langford, Paul Mineiro
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:27428-27453, 2022.

Abstract

A central problem in sequential decision making is to develop algorithms that are practical and computationally efficient, yet support the use of flexible, general-purpose models. Focusing on the contextual bandit problem, recent progress provides provably efficient algorithms with strong empirical performance when the number of possible alternatives (“actions”) is small, but guarantees for decision making in large, continuous action spaces have remained elusive, leading to a significant gap between theory and practice. We present the first efficient, general-purpose algorithm for contextual bandits with continuous, linearly structured action spaces. Our algorithm makes use of computational oracles for (i) supervised learning, and (ii) optimization over the action space, and achieves sample complexity, runtime, and memory independent of the size of the action space. In addition, it is simple and practical. We perform a large-scale empirical evaluation, and show that our approach typically enjoys superior performance and efficiency compared to standard baselines.
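The abstract's core interface, two computational oracles (one for supervised regression over rewards, one for optimizing a linear objective over the action space), can be illustrated with a minimal, hypothetical sketch. This is not the paper's algorithm; the exploration scheme (parameter perturbation), the unit-ball action set, and all names here are illustrative assumptions, showing only how the two oracles keep runtime and memory independent of the number of actions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                                  # action-embedding dimension (assumed)
theta_true = rng.normal(size=d)        # hidden reward parameter (synthetic)

def regression_oracle(X, y):
    """Supervised-learning oracle: least-squares fit of observed rewards."""
    return np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)[0]

def linear_opt_oracle(theta):
    """Action-optimization oracle: argmax of <theta, a> over the unit ball.

    For a continuous, linearly structured action set this is a closed-form
    call -- no enumeration of actions is ever needed.
    """
    n = np.linalg.norm(theta)
    return theta / n if n > 0 else np.zeros_like(theta)

X, y = [], []
theta_hat = np.zeros(d)
for t in range(200):
    # Illustrative exploration: perturb the estimate, then call the
    # optimization oracle to pick an action (never iterating over actions).
    a = linear_opt_oracle(theta_hat + rng.normal(scale=0.5, size=d))
    r = float(theta_true @ a) + rng.normal(scale=0.1)   # noisy linear reward
    X.append(a)
    y.append(r)
    theta_hat = regression_oracle(X, y)                 # refit via the oracle

# Reward of the greedy action under the learned parameter, vs. the optimum
print(float(theta_true @ linear_opt_oracle(theta_hat)),
      float(np.linalg.norm(theta_true)))
```

Both oracle calls cost O(d) or one regression solve per round, so nothing in the loop depends on the (infinite) number of actions, which is the computational point the abstract emphasizes.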

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhu22b,
  title     = {Contextual Bandits with Large Action Spaces: Made Practical},
  author    = {Zhu, Yinglun and Foster, Dylan J and Langford, John and Mineiro, Paul},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {27428--27453},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhu22b/zhu22b.pdf},
  url       = {https://proceedings.mlr.press/v162/zhu22b.html},
  abstract  = {A central problem in sequential decision making is to develop algorithms that are practical and computationally efficient, yet support the use of flexible, general-purpose models. Focusing on the contextual bandit problem, recent progress provides provably efficient algorithms with strong empirical performance when the number of possible alternatives (“actions”) is small, but guarantees for decision making in large, continuous action spaces have remained elusive, leading to a significant gap between theory and practice. We present the first efficient, general-purpose algorithm for contextual bandits with continuous, linearly structured action spaces. Our algorithm makes use of computational oracles for (i) supervised learning, and (ii) optimization over the action space, and achieves sample complexity, runtime, and memory independent of the size of the action space. In addition, it is simple and practical. We perform a large-scale empirical evaluation, and show that our approach typically enjoys superior performance and efficiency compared to standard baselines.}
}
Endnote
%0 Conference Paper
%T Contextual Bandits with Large Action Spaces: Made Practical
%A Yinglun Zhu
%A Dylan J Foster
%A John Langford
%A Paul Mineiro
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhu22b
%I PMLR
%P 27428--27453
%U https://proceedings.mlr.press/v162/zhu22b.html
%V 162
%X A central problem in sequential decision making is to develop algorithms that are practical and computationally efficient, yet support the use of flexible, general-purpose models. Focusing on the contextual bandit problem, recent progress provides provably efficient algorithms with strong empirical performance when the number of possible alternatives (“actions”) is small, but guarantees for decision making in large, continuous action spaces have remained elusive, leading to a significant gap between theory and practice. We present the first efficient, general-purpose algorithm for contextual bandits with continuous, linearly structured action spaces. Our algorithm makes use of computational oracles for (i) supervised learning, and (ii) optimization over the action space, and achieves sample complexity, runtime, and memory independent of the size of the action space. In addition, it is simple and practical. We perform a large-scale empirical evaluation, and show that our approach typically enjoys superior performance and efficiency compared to standard baselines.
APA
Zhu, Y., Foster, D.J., Langford, J. & Mineiro, P. (2022). Contextual Bandits with Large Action Spaces: Made Practical. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:27428-27453. Available from https://proceedings.mlr.press/v162/zhu22b.html.