Boundary Crossing for General Exponential Families

Odalric-Ambrym Maillard
Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:151-184, 2017.

Abstract

We consider parametric exponential families of dimension $K$ on the real line. We study a variant of boundary crossing probabilities coming from the multi-armed bandit literature, in the case when the real-valued distributions form an exponential family of dimension $K$. Formally, our result is a concentration inequality that bounds the probability that $\mathcal{B}^\psi(\hat \theta_n,\theta^\star)\geq f(t/n)/n$, where $\theta^\star$ is the parameter of an unknown target distribution, $\hat \theta_n$ is the empirical parameter estimate built from $n$ observations, $\psi$ is the log-partition function of the exponential family and $\mathcal{B}^\psi$ is the corresponding Bregman divergence. From the perspective of stochastic multi-armed bandits, we pay special attention to the case when the boundary function $f$ is logarithmic, as it enables the analysis of the regret of the state-of-the-art KL-ucb and KL-ucb+ strategies, whose analysis was left open in such generality. Indeed, previous results only hold for the case when $K=1$, while we provide results for arbitrary finite dimension $K$, thus considerably extending the existing results. Perhaps surprisingly, we highlight that the proof techniques to achieve these strong results already existed three decades ago in the work of T.L. Lai, and were apparently forgotten in the bandit community. We provide a modern rewriting of these beautiful techniques that we believe are useful beyond the application to stochastic multi-armed bandits.
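
To make the bounded quantity concrete, the following sketch specializes it to the one-dimensional ($K=1$) Bernoulli family; the Bregman-divergence convention used here (under which $\mathcal{B}^\psi(\theta,\theta')$ equals the Kullback-Leibler divergence between the corresponding distributions) and the exact form of the logarithmic boundary are assumptions made for illustration and may differ in detail from the paper's definitions.

For the Bernoulli family with natural parameter $\theta$ and log-partition function
\[
  \psi(\theta)=\log\bigl(1+e^{\theta}\bigr), \qquad \mu=\psi'(\theta)=\frac{e^{\theta}}{1+e^{\theta}},
\]
the Bregman divergence reduces to the binary relative entropy between mean parameters,
\[
  \mathcal{B}^{\psi}(\hat\theta_n,\theta^\star)
  = \mathrm{kl}(\hat\mu_n,\mu^\star)
  = \hat\mu_n\log\frac{\hat\mu_n}{\mu^\star}+(1-\hat\mu_n)\log\frac{1-\hat\mu_n}{1-\mu^\star},
\]
so that, for a logarithmic boundary such as $f(x)=\log(x)$, the boundary-crossing event takes the form
\[
  \mathrm{kl}(\hat\mu_n,\mu^\star)\;\geq\;\frac{\log(t/n)}{n},
\]
which is the type of event controlled in regret analyses of KL-ucb-style index strategies.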

Cite this Paper


BibTeX
@InProceedings{pmlr-v76-maillard17a,
  title     = {Boundary Crossing for General Exponential Families},
  author    = {Maillard, Odalric-Ambrym},
  booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
  pages     = {151--184},
  year      = {2017},
  editor    = {Hanneke, Steve and Reyzin, Lev},
  volume    = {76},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v76/maillard17a/maillard17a.pdf},
  url       = {https://proceedings.mlr.press/v76/maillard17a.html}
}
APA
Maillard, O.-A. (2017). Boundary Crossing for General Exponential Families. Proceedings of the 28th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 76:151-184. Available from https://proceedings.mlr.press/v76/maillard17a.html.