Hierarchical Bayesian Bandits

Joey Hong, Branislav Kveton, Manzil Zaheer, Mohammad Ghavamzadeh
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7724-7741, 2022.

Abstract

Meta-, multi-task, and federated learning can all be viewed as solving similar tasks, drawn from a distribution that reflects task similarities. We provide a unified view of these problems as learning to act in a hierarchical Bayesian bandit. We propose and analyze a natural hierarchical Thompson sampling algorithm (HierTS) for this class of problems. Our regret bounds hold for many variants of the problems, including when the tasks are solved sequentially or in parallel, and show that the regret decreases with a more informative prior. Our proofs rely on a novel total variance decomposition that can be applied beyond our models. Our theory is complemented by experiments, which show that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically efficient tool for learning to act in similar bandit tasks.
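The abstract pins down the skeleton of HierTS: sample the shared hyper-parameter from its hyper-posterior, then sample each task's parameter conditioned on that sample, and act greedily. The "total variance decomposition" it mentions presumably builds on the law of total variance, Var[X] = E[Var[X | Y]] + Var[E[X | Y]], applied to the posterior over task parameters given the hyper-parameter. Below is a minimal NumPy sketch of this two-stage sampling for a Gaussian K-armed instantiation with known variances; the specific model (per-arm shared means mu*_k, task means theta_{s,k} ~ N(mu*_k, tau^2), Gaussian rewards) and all constants and names are our illustrative assumptions, not the paper's exact setting.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes and (known) variances; all values are ours, not the paper's.
K, S, T = 5, 20, 200      # arms per task, number of tasks, rounds per task
m0, q0 = 0.0, 1.0         # hyper-prior N(m0, q0^2) on each arm's shared mean mu*_k
tau = 0.3                 # task-level std: theta_{s,k} ~ N(mu*_k, tau^2)
sigma = 0.5               # reward noise std: r ~ N(theta_{s,k}, sigma^2)

# Sample a ground-truth hierarchical environment.
mu_star = rng.normal(m0, q0, K)
theta = rng.normal(mu_star, tau, (S, K))

# Sufficient statistics per (task, arm): pull counts and reward sums.
n = np.zeros((S, K))
ysum = np.zeros((S, K))

def hier_ts_action(s):
    """One round of hierarchical Thompson sampling for task s."""
    # 1) Hyper-posterior of mu*_k: each task's arm-k sample mean, marginalized
    #    over its theta, is N(mu*_k, tau^2 + sigma^2 / n_{s,k}).
    ybar = ysum / np.maximum(n, 1)
    w = np.where(n > 0, 1.0 / (tau**2 + sigma**2 / np.maximum(n, 1)), 0.0)
    prec_mu = 1.0 / q0**2 + w.sum(axis=0)
    mean_mu = (m0 / q0**2 + (w * ybar).sum(axis=0)) / prec_mu
    mu = rng.normal(mean_mu, 1.0 / np.sqrt(prec_mu))   # 2) sample the hyper-parameter

    # 3) Task posterior of theta_{s,k} given the sampled mu, then sample it.
    prec_th = 1.0 / tau**2 + n[s] / sigma**2
    mean_th = (mu / tau**2 + ysum[s] / sigma**2) / prec_th
    theta_s = rng.normal(mean_th, 1.0 / np.sqrt(prec_th))
    return int(np.argmax(theta_s))                     # 4) act greedily on the sample

regret = 0.0
for t in range(T):
    for s in range(S):                                 # tasks played in parallel
        a = hier_ts_action(s)
        r = rng.normal(theta[s, a], sigma)
        n[s, a] += 1
        ysum[s, a] += r
        regret += theta[s].max() - theta[s, a]
print(f"cumulative regret across {S} tasks: {regret:.1f}")

In this Gaussian case the two-stage draw is an exact joint posterior sample, since given mu the tasks decouple and the per-arm sample means are sufficient for mu*. The role of the prior is visible in tau: a small tau (tasks nearly identical) lets every task's data sharpen the hyper-posterior and shrink regret, while a large tau degrades the sketch toward independent per-task Thompson sampling, matching the abstract's claim that regret decreases with a more informative prior.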

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-hong22c,
  title     = {Hierarchical Bayesian Bandits},
  author    = {Hong, Joey and Kveton, Branislav and Zaheer, Manzil and Ghavamzadeh, Mohammad},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7724--7741},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/hong22c/hong22c.pdf},
  url       = {https://proceedings.mlr.press/v151/hong22c.html},
  abstract  = {Meta-, multi-task, and federated learning can all be viewed as solving similar tasks, drawn from a distribution that reflects task similarities. We provide a unified view of these problems as learning to act in a hierarchical Bayesian bandit. We propose and analyze a natural hierarchical Thompson sampling algorithm (HierTS) for this class of problems. Our regret bounds hold for many variants of the problems, including when the tasks are solved sequentially or in parallel, and show that the regret decreases with a more informative prior. Our proofs rely on a novel total variance decomposition that can be applied beyond our models. Our theory is complemented by experiments, which show that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically efficient tool for learning to act in similar bandit tasks.}
}
Endnote
%0 Conference Paper
%T Hierarchical Bayesian Bandits
%A Joey Hong
%A Branislav Kveton
%A Manzil Zaheer
%A Mohammad Ghavamzadeh
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-hong22c
%I PMLR
%P 7724--7741
%U https://proceedings.mlr.press/v151/hong22c.html
%V 151
%X Meta-, multi-task, and federated learning can all be viewed as solving similar tasks, drawn from a distribution that reflects task similarities. We provide a unified view of these problems as learning to act in a hierarchical Bayesian bandit. We propose and analyze a natural hierarchical Thompson sampling algorithm (HierTS) for this class of problems. Our regret bounds hold for many variants of the problems, including when the tasks are solved sequentially or in parallel, and show that the regret decreases with a more informative prior. Our proofs rely on a novel total variance decomposition that can be applied beyond our models. Our theory is complemented by experiments, which show that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically efficient tool for learning to act in similar bandit tasks.
APA
Hong, J., Kveton, B., Zaheer, M. & Ghavamzadeh, M. (2022). Hierarchical Bayesian Bandits. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7724-7741. Available from https://proceedings.mlr.press/v151/hong22c.html.