PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits

Yevgeny Seldin, Nicolò Cesa-Bianchi, Peter Auer, François Laviolette, John Shawe-Taylor
Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2, PMLR 26:98-111, 2012.

Abstract

We develop a new tool for data-dependent analysis of the exploration-exploitation trade-off in learning under limited feedback. Our tool is based on two main ingredients. The first ingredient is a new concentration inequality that makes it possible to control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. The second ingredient is an application of this inequality to the exploration-exploitation trade-off via importance weighted sampling. We apply the new tool to the stochastic multiarmed bandit problem; however, the main contribution of this paper is the development and understanding of the new tool rather than an improvement of existing algorithms for stochastic multiarmed bandits. In follow-up work we demonstrate that the new tool can improve on the state of the art in structurally richer problems, such as stochastic multiarmed bandits with side information.
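The second ingredient mentioned above, importance weighted sampling, admits a short illustration: when only the reward of the played arm is observed, dividing that reward by the probability of playing the arm yields an unbiased estimate of each arm's mean reward. The sketch below is not the paper's algorithm, only a minimal demonstration of the importance-weighted estimator under an assumed fixed sampling distribution and Bernoulli rewards; all names are illustrative.

```python
import random

def iw_estimates(probs, true_means, T, seed=0):
    """Importance-weighted (unbiased) mean-reward estimates for a K-armed bandit.

    Each round, an arm is drawn from the fixed sampling distribution `probs`;
    only that arm's reward is observed, and its running total is incremented
    by reward / probability, which keeps every per-arm estimate unbiased.
    """
    rng = random.Random(seed)
    K = len(probs)
    totals = [0.0] * K
    for _ in range(T):
        a = rng.choices(range(K), weights=probs)[0]       # sample an arm
        r = 1.0 if rng.random() < true_means[a] else 0.0  # Bernoulli reward
        totals[a] += r / probs[a]                         # importance weighting
    return [s / T for s in totals]                        # approximates true_means
```

Note the trade-off the paper's inequality is designed to control: the estimator is unbiased, but its variance grows as the sampling probability of an arm shrinks, which is why a Bernstein-type (variance-sensitive) bound is the natural tool here.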

Cite this Paper


BibTeX
@InProceedings{pmlr-v26-seldin12a,
  title     = {PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits},
  author    = {Seldin, Yevgeny and Cesa-Bianchi, Nicolò and Auer, Peter and Laviolette, François and Shawe-Taylor, John},
  booktitle = {Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2},
  pages     = {98--111},
  year      = {2012},
  editor    = {Glowacka, Dorota and Dorard, Louis and Shawe-Taylor, John},
  volume    = {26},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v26/seldin12a/seldin12a.pdf},
  url       = {https://proceedings.mlr.press/v26/seldin12a.html},
  abstract  = {We develop a new tool for data-dependent analysis of the exploration-exploitation trade-off in learning under limited feedback. Our tool is based on two main ingredients. The first ingredient is a new concentration inequality that makes it possible to control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. The second ingredient is an application of this inequality to the exploration-exploitation trade-off via importance weighted sampling. We apply the new tool to the stochastic multiarmed bandit problem, however, the main importance of this paper is the development and understanding of the new tool rather than improvement of existing algorithms for stochastic multiarmed bandits. In the follow-up work we demonstrate that the new tool can improve over state-of-the-art in structurally richer problems, such as stochastic multiarmed bandits with side information.}
}
Endnote
%0 Conference Paper
%T PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits
%A Yevgeny Seldin
%A Nicolò Cesa-Bianchi
%A Peter Auer
%A François Laviolette
%A John Shawe-Taylor
%B Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2
%C Proceedings of Machine Learning Research
%D 2012
%E Dorota Glowacka
%E Louis Dorard
%E John Shawe-Taylor
%F pmlr-v26-seldin12a
%I PMLR
%P 98--111
%U https://proceedings.mlr.press/v26/seldin12a.html
%V 26
%X We develop a new tool for data-dependent analysis of the exploration-exploitation trade-off in learning under limited feedback. Our tool is based on two main ingredients. The first ingredient is a new concentration inequality that makes it possible to control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. The second ingredient is an application of this inequality to the exploration-exploitation trade-off via importance weighted sampling. We apply the new tool to the stochastic multiarmed bandit problem, however, the main importance of this paper is the development and understanding of the new tool rather than improvement of existing algorithms for stochastic multiarmed bandits. In the follow-up work we demonstrate that the new tool can improve over state-of-the-art in structurally richer problems, such as stochastic multiarmed bandits with side information.
RIS
TY - CPAPER
TI - PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits
AU - Yevgeny Seldin
AU - Nicolò Cesa-Bianchi
AU - Peter Auer
AU - François Laviolette
AU - John Shawe-Taylor
BT - Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2
DA - 2012/05/02
ED - Dorota Glowacka
ED - Louis Dorard
ED - John Shawe-Taylor
ID - pmlr-v26-seldin12a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 26
SP - 98
EP - 111
L1 - http://proceedings.mlr.press/v26/seldin12a/seldin12a.pdf
UR - https://proceedings.mlr.press/v26/seldin12a.html
AB - We develop a new tool for data-dependent analysis of the exploration-exploitation trade-off in learning under limited feedback. Our tool is based on two main ingredients. The first ingredient is a new concentration inequality that makes it possible to control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. The second ingredient is an application of this inequality to the exploration-exploitation trade-off via importance weighted sampling. We apply the new tool to the stochastic multiarmed bandit problem, however, the main importance of this paper is the development and understanding of the new tool rather than improvement of existing algorithms for stochastic multiarmed bandits. In the follow-up work we demonstrate that the new tool can improve over state-of-the-art in structurally richer problems, such as stochastic multiarmed bandits with side information.
ER -
APA
Seldin, Y., Cesa-Bianchi, N., Auer, P., Laviolette, F. & Shawe-Taylor, J. (2012). PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits. Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2, in Proceedings of Machine Learning Research 26:98-111. Available from https://proceedings.mlr.press/v26/seldin12a.html.