Neural Contextual Bandits with UCB-based Exploration

Dongruo Zhou, Lihong Li, Quanquan Gu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11492-11502, 2020.

Abstract

We study the stochastic contextual bandit problem, where the reward is generated from an unknown function with additive noise. No assumption is made about the reward function other than boundedness. We propose a new algorithm, NeuralUCB, which leverages the representation power of deep neural networks and uses a neural network-based random feature mapping to construct an upper confidence bound (UCB) of reward for efficient exploration. We prove that, under standard assumptions, NeuralUCB achieves $\tilde O(\sqrt{T})$ regret, where $T$ is the number of rounds. To the best of our knowledge, it is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee. We also show the algorithm is empirically competitive against representative baselines in a number of benchmarks.
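The NeuralUCB idea described above, estimating reward with a neural network and adding an exploration bonus built from the network's gradient features, can be sketched as a toy script. This is an illustrative sketch, not the paper's implementation: the two-layer ReLU network, the width `m = 16`, the synthetic cosine reward, the per-round SGD step, and the fixed bonus scale `beta` are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 4, 16                               # context dimension, hidden width
W1 = rng.normal(0, 1 / np.sqrt(d), (m, d)) # first-layer weights
w2 = rng.normal(0, 1 / np.sqrt(m), m)      # output weights

def forward(x):
    """Network reward estimate f(x; theta)."""
    return w2 @ np.maximum(W1 @ x, 0.0)

def grad(x):
    """Gradient of f(x; theta) w.r.t. all parameters, flattened.
    This gradient acts as the (random) feature map for the UCB bonus."""
    pre = W1 @ x
    gW1 = np.outer(w2 * (pre > 0), x)
    gw2 = np.maximum(pre, 0.0)
    return np.concatenate([gW1.ravel(), gw2])

p = m * d + m                  # total number of parameters
lam, beta, lr = 1.0, 1.0, 0.01 # regularizer, bonus scale, learning rate
Zinv = np.eye(p) / lam         # inverse of the regularized design matrix Z

def true_reward(x):
    """Hypothetical environment, unknown to the learner."""
    return np.cos(3 * x[0]) + 0.1 * rng.normal()

history = []
for t in range(200):
    contexts = rng.normal(size=(5, d))     # 5 candidate arms this round
    ucbs = []
    for x in contexts:
        g = grad(x)
        bonus = beta * np.sqrt(max(g @ Zinv @ g, 0.0))
        ucbs.append(forward(x) + bonus)    # optimistic reward estimate
    x = contexts[int(np.argmax(ucbs))]     # pull the arm with the highest UCB
    r = true_reward(x)
    history.append((x, r))

    # Sherman-Morrison rank-one update of Z^{-1} with the chosen arm's features
    g = grad(x)
    Zg = Zinv @ g
    Zinv -= np.outer(Zg, Zg) / (1.0 + g @ Zg)

    # one SGD step on the squared loss for the latest observation
    err = forward(x) - r
    pre = W1 @ x
    W1 -= lr * err * np.outer(w2 * (pre > 0), x)
    w2 -= lr * err * np.maximum(pre, 0.0)
```

The exploration bonus `sqrt(g^T Z^{-1} g)` shrinks for directions in parameter space that past arms have already covered, so the agent gradually shifts from exploring to exploiting, mirroring the linear-UCB mechanism but with learned gradient features in place of raw contexts.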

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhou20a,
  title     = {Neural Contextual Bandits with {UCB}-based Exploration},
  author    = {Zhou, Dongruo and Li, Lihong and Gu, Quanquan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11492--11502},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhou20a/zhou20a.pdf},
  url       = {http://proceedings.mlr.press/v119/zhou20a.html},
  abstract  = {We study the stochastic contextual bandit problem, where the reward is generated from an unknown function with additive noise. No assumption is made about the reward function other than boundedness. We propose a new algorithm, NeuralUCB, which leverages the representation power of deep neural networks and uses a neural network-based random feature mapping to construct an upper confidence bound (UCB) of reward for efficient exploration. We prove that, under standard assumptions, NeuralUCB achieves $\tilde O(\sqrt{T})$ regret, where $T$ is the number of rounds. To the best of our knowledge, it is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee. We also show the algorithm is empirically competitive against representative baselines in a number of benchmarks.}
}
Endnote
%0 Conference Paper
%T Neural Contextual Bandits with UCB-based Exploration
%A Dongruo Zhou
%A Lihong Li
%A Quanquan Gu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhou20a
%I PMLR
%P 11492--11502
%U http://proceedings.mlr.press/v119/zhou20a.html
%V 119
%X We study the stochastic contextual bandit problem, where the reward is generated from an unknown function with additive noise. No assumption is made about the reward function other than boundedness. We propose a new algorithm, NeuralUCB, which leverages the representation power of deep neural networks and uses a neural network-based random feature mapping to construct an upper confidence bound (UCB) of reward for efficient exploration. We prove that, under standard assumptions, NeuralUCB achieves $\tilde O(\sqrt{T})$ regret, where $T$ is the number of rounds. To the best of our knowledge, it is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee. We also show the algorithm is empirically competitive against representative baselines in a number of benchmarks.
APA
Zhou, D., Li, L. & Gu, Q. (2020). Neural Contextual Bandits with UCB-based Exploration. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11492-11502. Available from http://proceedings.mlr.press/v119/zhou20a.html.