Meta Learning in Bandits within shared affine Subspaces

Steven Bilaj, Sofien Dhouib, Setareh Maghsudi
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:523-531, 2024.

Abstract

We study the problem of meta-learning several contextual stochastic bandit tasks by leveraging their concentration around a low-dimensional affine subspace, which we learn via online principal component analysis to reduce the expected regret over the encountered bandits. We propose and theoretically analyze two strategies that solve the problem: one based on the principle of optimism in the face of uncertainty, and the other on Thompson sampling. Our framework is generic and includes previously proposed approaches as special cases. Moreover, empirical results show that our methods significantly reduce the regret on several bandit tasks.
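As a rough illustration of the subspace-learning idea described above (not the authors' exact algorithm), the sketch below estimates a shared affine subspace from a stream of per-task parameter vectors using an Oja-style online PCA update. The data model, step size, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks = 20, 3, 2000

# Synthetic tasks: parameters concentrate around an affine subspace
# offset + span(B_true), plus small isotropic noise.
B_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
offset = rng.standard_normal(d)

mean = np.zeros(d)                                 # running estimate of the affine offset
B = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal subspace estimate

for t in range(1, n_tasks + 1):
    theta = offset + B_true @ rng.standard_normal(k) + 0.05 * rng.standard_normal(d)
    mean += (theta - mean) / t          # online mean -> affine offset
    x = theta - mean                    # center before PCA
    eta = 1.0 / np.sqrt(t)              # decaying step size
    B += eta * np.outer(x, x @ B)       # Oja update toward the top-k subspace
    B, _ = np.linalg.qr(B)              # re-orthonormalize

# Singular values of B_true^T B are cosines of the principal angles
# between the true and estimated subspaces; values near 1 mean recovery.
overlap = np.linalg.svd(B_true.T @ B, compute_uv=False)
print(overlap.min())  # close to 1 when the subspace is recovered
```

In the meta-bandit setting, each `theta` would instead be the parameter estimate obtained after interacting with one bandit task, and the learned offset and subspace would then bias exploration in subsequent tasks.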

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-bilaj24a,
  title     = {Meta Learning in Bandits within shared affine Subspaces},
  author    = {Bilaj, Steven and Dhouib, Sofien and Maghsudi, Setareh},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {523--531},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/bilaj24a/bilaj24a.pdf},
  url       = {https://proceedings.mlr.press/v238/bilaj24a.html},
  abstract  = {We study the problem of meta-learning several contextual stochastic bandits tasks by leveraging their concentration around a low dimensional affine subspace, which we learn via online principal component analysis to reduce the expected regret over the encountered bandits. We propose and theoretically analyze two strategies that solve the problem: One based on the principle of optimism in the face of uncertainty and the other via Thompson sampling. Our framework is generic and includes previously proposed approaches as special cases. Besides, the empirical results show that our methods significantly reduce the regret on several bandit tasks.}
}
Endnote
%0 Conference Paper
%T Meta Learning in Bandits within shared affine Subspaces
%A Steven Bilaj
%A Sofien Dhouib
%A Setareh Maghsudi
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-bilaj24a
%I PMLR
%P 523--531
%U https://proceedings.mlr.press/v238/bilaj24a.html
%V 238
%X We study the problem of meta-learning several contextual stochastic bandits tasks by leveraging their concentration around a low dimensional affine subspace, which we learn via online principal component analysis to reduce the expected regret over the encountered bandits. We propose and theoretically analyze two strategies that solve the problem: One based on the principle of optimism in the face of uncertainty and the other via Thompson sampling. Our framework is generic and includes previously proposed approaches as special cases. Besides, the empirical results show that our methods significantly reduce the regret on several bandit tasks.
APA
Bilaj, S., Dhouib, S. & Maghsudi, S. (2024). Meta Learning in Bandits within shared affine Subspaces. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:523-531. Available from https://proceedings.mlr.press/v238/bilaj24a.html.