Stochastic Simultaneous Optimistic Optimization

Michal Valko, Alexandra Carpentier, Rémi Munos
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):19-27, 2013.

Abstract

We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a), our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.
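The loop the abstract describes can be sketched in a few dozen lines. The following is a minimal, illustrative Python sketch of a StoSOO-style procedure on a one-dimensional interval, not the paper's exact algorithm: the trisection rule, the b-value form, and the parameter choices (evaluations per cell of order n/log³n, a square-root depth limit, a 1/√n confidence parameter) only follow the orders of magnitude suggested by the paper's analysis, and all constants here are assumptions.

```python
import math
import random

def stosoo(f, lo, hi, budget):
    """Illustrative StoSOO-style sketch on the interval [lo, hi].

    f      : noisy evaluation oracle, f(x) = objective(x) + noise
    budget : total number n of noisy evaluations allowed
    """
    n = budget
    k_max = max(1, int(n / math.log(n) ** 3))      # evaluations per cell (assumed order)
    depth_max = max(1, int(math.sqrt(n / k_max)))  # deepest level explored (assumed order)
    delta = 1.0 / math.sqrt(n)                     # confidence parameter (assumed)

    # A leaf cell is [lo, hi, depth, reward_sum, pull_count].
    leaves = [[lo, hi, 0, 0.0, 0]]
    used = 0

    def b_value(c):
        # Upper confidence bound: empirical mean plus a width term
        # (infinite for a cell that has never been evaluated).
        if c[4] == 0:
            return float("inf")
        return c[3] / c[4] + math.sqrt(math.log(n * k_max / delta) / (2 * c[4]))

    while used < n:
        v_max = -float("inf")
        progressed = False
        # Sweep the depths from shallow to deep, as in SOO.
        for d in range(depth_max + 1):
            at_d = [c for c in leaves if c[2] == d]
            if not at_d:
                continue
            best = max(at_d, key=b_value)
            if b_value(best) < v_max:
                continue  # not optimistic enough versus shallower expansions
            if best[4] < k_max:
                # Optimistic leaf not yet fully sampled: evaluate its centre.
                best[3] += f(0.5 * (best[0] + best[1]))
                best[4] += 1
                used += 1
                progressed = True
                if used >= n:
                    break
            elif best[2] < depth_max:
                # Fully sampled: refine the partition by splitting into three.
                leaves.remove(best)
                w = (best[1] - best[0]) / 3.0
                for i in range(3):
                    leaves.append([best[0] + i * w, best[0] + (i + 1) * w,
                                   best[2] + 1, 0.0, 0])
                v_max = max(v_max, best[3] / best[4])
                progressed = True
        if not progressed:
            break

    # Recommend the centre of the sampled leaf with the best empirical mean.
    sampled = [c for c in leaves if c[4] > 0]
    best = max(sampled, key=lambda c: c[3] / c[4])
    return 0.5 * (best[0] + best[1])
```

For example, maximizing x ↦ -(x - 0.5)² on [0, 1] under Gaussian noise with a budget of a few hundred evaluations should return a point close to 0.5, without the sketch ever being told anything about the function's smoothness.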

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-valko13,
  title     = {Stochastic Simultaneous Optimistic Optimization},
  author    = {Valko, Michal and Carpentier, Alexandra and Munos, Rémi},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {19--27},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/valko13.pdf},
  url       = {https://proceedings.mlr.press/v28/valko13.html},
  abstract  = {We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.}
}
Endnote
%0 Conference Paper
%T Stochastic Simultaneous Optimistic Optimization
%A Michal Valko
%A Alexandra Carpentier
%A Rémi Munos
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-valko13
%I PMLR
%P 19--27
%U https://proceedings.mlr.press/v28/valko13.html
%V 28
%N 2
%X We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.
RIS
TY - CPAPER
TI - Stochastic Simultaneous Optimistic Optimization
AU - Michal Valko
AU - Alexandra Carpentier
AU - Rémi Munos
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-valko13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 2
SP - 19
EP - 27
L1 - http://proceedings.mlr.press/v28/valko13.pdf
UR - https://proceedings.mlr.press/v28/valko13.html
AB - We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (Kleinberg et al., 2008; Bubeck et al., 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known.
ER -
APA
Valko, M., Carpentier, A. & Munos, R. (2013). Stochastic Simultaneous Optimistic Optimization. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):19-27. Available from https://proceedings.mlr.press/v28/valko13.html.