Data-Driven Online Model Selection With Regret Guarantees

Chris Dann, Claudio Gentile, Aldo Pacchiano
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1531-1539, 2024.

Abstract

We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the *realized* regret incurred by each base learner for the learning environment at hand (as opposed to the *expected* regret), and single out the best such regret. We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.
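The data-driven flavor of regret balancing described above can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's actual algorithm: each base learner is reduced to a fixed policy with an unknown Bernoulli reward mean, and the meta-learner plays whichever base learner currently has the smallest *estimated realized regret* against an optimistic empirical benchmark, so that the realized regrets stay balanced. All names, the confidence widths, and the slack term are illustrative choices.

```python
import math
import random

def regret_balancing(learner_means, horizon, seed=0):
    # Toy setup: each base learner commits to one fixed policy whose pulls
    # yield Bernoulli rewards with the given (hidden) mean.  The meta-learner
    # tracks an estimated realized regret for every base learner against an
    # optimistic empirical benchmark, and always plays the learner whose
    # estimate is currently smallest -- balancing the realized regrets.
    rng = random.Random(seed)
    k = len(learner_means)
    plays = [0] * k       # times each base learner was selected
    total = [0.0] * k     # cumulative reward collected by each base learner
    for t in range(horizon):
        if t < k:
            i = t  # play every base learner once to initialize the estimates
        else:
            mu = [total[j] / plays[j] for j in range(k)]
            width = [math.sqrt(2 * math.log(t) / plays[j]) for j in range(k)]
            benchmark = max(mu[j] + width[j] for j in range(k))
            # Estimated realized regret so far, shrunk by a confidence slack
            # so that plausibly-best learners keep an estimate of zero.
            est = [plays[j] * max(benchmark - mu[j] - 2 * width[j], 0.0)
                   for j in range(k)]
            i = min(range(k), key=lambda j: est[j])
        reward = 1.0 if rng.random() < learner_means[i] else 0.0
        plays[i] += 1
        total[i] += reward
    return plays

counts = regret_balancing([0.9, 0.5, 0.3], horizon=2000)
print(counts)  # the best base learner (index 0) accumulates the most plays
```

Because the estimates are built from the rewards actually observed, the selection adapts to the regret each base learner *realizes* on the environment at hand, rather than relying on candidate regret bounds supplied in advance.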

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-dann24a,
  title     = {Data-Driven Online Model Selection With Regret Guarantees},
  author    = {Dann, Chris and Gentile, Claudio and Pacchiano, Aldo},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1531--1539},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/dann24a/dann24a.pdf},
  url       = {https://proceedings.mlr.press/v238/dann24a.html},
  abstract  = {We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the *realized* regret incurred by each base learner for the learning environment at hand (as opposed to the *expected* regret), and single out the best such regret. We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.}
}
Endnote
%0 Conference Paper
%T Data-Driven Online Model Selection With Regret Guarantees
%A Chris Dann
%A Claudio Gentile
%A Aldo Pacchiano
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-dann24a
%I PMLR
%P 1531--1539
%U https://proceedings.mlr.press/v238/dann24a.html
%V 238
%X We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the *realized* regret incurred by each base learner for the learning environment at hand (as opposed to the *expected* regret), and single out the best such regret. We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.
APA
Dann, C., Gentile, C. & Pacchiano, A. (2024). Data-Driven Online Model Selection With Regret Guarantees. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1531-1539. Available from https://proceedings.mlr.press/v238/dann24a.html.