Efficient Online Bayesian Inference for Neural Bandits

Gerardo Duran-Martin, Aleyna Kara, Kevin Murphy
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6002-6021, 2022.

Abstract

In this paper we present a new algorithm for online (sequential) inference in Bayesian neural networks, and show its suitability for tackling contextual bandit problems. The key idea is to combine the extended Kalman filter (which locally linearizes the likelihood function at each time step) with a (learned or random) low-dimensional affine subspace for the parameters; the use of a subspace enables us to scale our algorithm to models with $\sim 1M$ parameters. While most other neural bandit methods need to store the entire past dataset in order to avoid the problem of “catastrophic forgetting”, our approach uses constant memory. This is possible because we represent uncertainty about all the parameters in the model, not just the final linear layer. We show good results on the “Deep Bayesian Bandit Showdown” benchmark, as well as MNIST and a recommender system.
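To make the key idea concrete, the following is a minimal sketch (not the authors' released code) of an extended Kalman filter update run over a low-dimensional affine subspace of the network weights, written in JAX. It assumes a scalar reward observation with Gaussian noise, a random (rather than learned) projection matrix, and a fixed offset point theta_star (e.g. pretrained weights); the names predict_fn, obs_var, and the initializer are illustrative assumptions, not the paper's API.

import jax
import jax.numpy as jnp

def init_subspace_posterior(key, full_dim, subspace_dim, prior_var=1.0):
    """Random affine subspace theta = theta_star + A @ z, with a Gaussian prior on z."""
    A = jax.random.normal(key, (full_dim, subspace_dim)) / jnp.sqrt(subspace_dim)
    mu = jnp.zeros(subspace_dim)               # posterior mean of z
    Sigma = prior_var * jnp.eye(subspace_dim)  # posterior covariance of z
    return A, mu, Sigma

def ekf_update(mu, Sigma, A, theta_star, x, y, predict_fn, obs_var=1.0):
    """One EKF step on the subspace coordinates z after observing reward y for context x."""
    def h(z):
        # Observation model: scalar reward predicted by the network at theta_star + A @ z.
        return predict_fn(theta_star + A @ z, x)

    y_hat = h(mu)
    H = jax.grad(h)(mu)                        # local linearization of h around the current mean
    S = H @ Sigma @ H + obs_var                # innovation variance (scalar)
    K = Sigma @ H / S                          # Kalman gain, shape (subspace_dim,)
    mu_new = mu + K * (y - y_hat)
    Sigma_new = Sigma - jnp.outer(K, H @ Sigma)
    return mu_new, Sigma_new

In a bandit loop, one would sample z from N(mu, Sigma), reconstruct theta = theta_star + A @ z, and pull the arm with the highest predicted reward (Thompson sampling). Only (mu, Sigma, A, theta_star) are stored between steps, so memory is constant in the number of observations, consistent with the claim above.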

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-duran-martin22a,
  title     = {Efficient Online Bayesian Inference for Neural Bandits},
  author    = {Duran-Martin, Gerardo and Kara, Aleyna and Murphy, Kevin},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6002--6021},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/duran-martin22a/duran-martin22a.pdf},
  url       = {https://proceedings.mlr.press/v151/duran-martin22a.html},
  abstract  = {In this paper we present a new algorithm for online (sequential) inference in Bayesian neural networks, and show its suitability for tackling contextual bandit problems. The key idea is to combine the extended Kalman filter (which locally linearizes the likelihood function at each time step) with a (learned or random) low-dimensional affine subspace for the parameters; the use of a subspace enables us to scale our algorithm to models with $\sim 1M$ parameters. While most other neural bandit methods need to store the entire past dataset in order to avoid the problem of “catastrophic forgetting”, our approach uses constant memory. This is possible because we represent uncertainty about all the parameters in the model, not just the final linear layer. We show good results on the “Deep Bayesian Bandit Showdown” benchmark, as well as MNIST and a recommender system.}
}
Endnote
%0 Conference Paper
%T Efficient Online Bayesian Inference for Neural Bandits
%A Gerardo Duran-Martin
%A Aleyna Kara
%A Kevin Murphy
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-duran-martin22a
%I PMLR
%P 6002--6021
%U https://proceedings.mlr.press/v151/duran-martin22a.html
%V 151
%X In this paper we present a new algorithm for online (sequential) inference in Bayesian neural networks, and show its suitability for tackling contextual bandit problems. The key idea is to combine the extended Kalman filter (which locally linearizes the likelihood function at each time step) with a (learned or random) low-dimensional affine subspace for the parameters; the use of a subspace enables us to scale our algorithm to models with $\sim 1M$ parameters. While most other neural bandit methods need to store the entire past dataset in order to avoid the problem of “catastrophic forgetting”, our approach uses constant memory. This is possible because we represent uncertainty about all the parameters in the model, not just the final linear layer. We show good results on the “Deep Bayesian Bandit Showdown” benchmark, as well as MNIST and a recommender system.
APA
Duran-Martin, G., Kara, A. & Murphy, K. (2022). Efficient Online Bayesian Inference for Neural Bandits. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:6002-6021. Available from https://proceedings.mlr.press/v151/duran-martin22a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v151/duran-martin22a/duran-martin22a.pdf