Multi-scale exploration of convex functions and bandit convex optimization

Sébastien Bubeck, Ronen Eldan
29th Annual Conference on Learning Theory, PMLR 49:583-589, 2016.

Abstract

We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is \tilde{O}(\mathrm{poly}(n) \sqrt{T}), where n is the dimension and T the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to construct a new algorithm for the Bayesian convex bandit problem.
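To make the problem setting concrete, here is a minimal sketch of the bandit convex optimization protocol the abstract refers to: at each of T rounds the learner picks a point in an n-dimensional domain and observes only the scalar loss at that point. The learner below uses the classical one-point gradient estimator of Flaxman, Kalai, and McMahan as a simple baseline; it is NOT the paper's multi-scale exploration algorithm, and the step sizes and toy loss are illustrative choices.

```python
import numpy as np

def bandit_convex_opt(losses, n, T, delta=0.1, eta=0.01):
    """Projected gradient descent with one-point bandit feedback.

    losses: callable (t, x) -> scalar loss f_t(x), convex in x
    n: dimension; T: number of rounds
    delta: perturbation radius; eta: step size
    (Baseline estimator only -- not the paper's algorithm.)
    """
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    total_loss = 0.0
    for t in range(T):
        u = rng.normal(size=n)
        u /= np.linalg.norm(u)               # uniform direction on the sphere
        y = x + delta * u                    # perturbed query point
        loss = losses(t, y)                  # only this scalar is observed
        total_loss += loss
        grad_est = (n / delta) * loss * u    # one-point gradient estimate
        x = x - eta * grad_est
        x = x / max(1.0, np.linalg.norm(x))  # project back onto the unit ball
    return x, total_loss

# Toy "adversary": a fixed quadratic loss with minimizer at 0.3 * e_1.
n, T = 2, 2000
target = np.zeros(n)
target[0] = 0.3
f = lambda t, x: float(np.sum((x - target) ** 2))
x_final, total = bandit_convex_opt(f, n, T)
print(x_final)
```

The point of the abstract's result is that, despite this impoverished scalar feedback, \tilde{O}(\mathrm{poly}(n)\sqrt{T}) regret against the best fixed point is achievable even against an adversarial sequence of convex losses.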

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-bubeck16,
  title     = {Multi-scale exploration of convex functions and bandit convex optimization},
  author    = {Sébastien Bubeck and Ronen Eldan},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {583--589},
  year      = {2016},
  editor    = {Vitaly Feldman and Alexander Rakhlin and Ohad Shamir},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/bubeck16.pdf},
  url       = {http://proceedings.mlr.press/v49/bubeck16.html},
  abstract  = {We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is \tilde{O}(\mathrm{poly}(n) \sqrt{T}), where n is the dimension and T the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to construct a new algorithm for the Bayesian convex bandit problem.}
}
Endnote
%0 Conference Paper
%T Multi-scale exploration of convex functions and bandit convex optimization
%A Sébastien Bubeck
%A Ronen Eldan
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-bubeck16
%I PMLR
%J Proceedings of Machine Learning Research
%P 583--589
%U http://proceedings.mlr.press
%V 49
%W PMLR
%X We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is \tilde{O}(\mathrm{poly}(n) \sqrt{T}), where n is the dimension and T the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to construct a new algorithm for the Bayesian convex bandit problem.
RIS
TY  - CPAPER
TI  - Multi-scale exploration of convex functions and bandit convex optimization
AU  - Sébastien Bubeck
AU  - Ronen Eldan
BT  - 29th Annual Conference on Learning Theory
PY  - 2016/06/06
DA  - 2016/06/06
ED  - Vitaly Feldman
ED  - Alexander Rakhlin
ED  - Ohad Shamir
ID  - pmlr-v49-bubeck16
PB  - PMLR
DP  - PMLR
SP  - 583
EP  - 589
L1  - http://proceedings.mlr.press/v49/bubeck16.pdf
UR  - http://proceedings.mlr.press/v49/bubeck16.html
AB  - We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is \tilde{O}(\mathrm{poly}(n) \sqrt{T}), where n is the dimension and T the number of rounds. This bound is obtained by studying the dual Bayesian maximin regret via the information ratio analysis of Russo and Van Roy, and then using the multi-scale exploration to construct a new algorithm for the Bayesian convex bandit problem.
ER  -
APA
Bubeck, S. & Eldan, R. (2016). Multi-scale exploration of convex functions and bandit convex optimization. 29th Annual Conference on Learning Theory, in PMLR 49:583-589.