Highly-Smooth Zero-th Order Online Optimization

Francis Bach, Vianney Perchet
29th Annual Conference on Learning Theory, PMLR 49:257-283, 2016.

Abstract

The minimization of convex functions that are only available through partial and noisy information is a key methodological problem in many disciplines. In this paper we consider convex optimization with noisy zero-th order information, that is, noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as logistic regression. We show that, as opposed to gradient-based algorithms, high-order smoothness may be used to improve estimation rates, with a precise dependence of our upper bounds on the degree of smoothness. In particular, we show that for infinitely differentiable functions, we recover the same dependence on sample size as gradient-based algorithms, with an extra dimension-dependent factor. This is done for both convex and strongly-convex functions, with finite-horizon and anytime algorithms. Finally, we also recover similar results in the online optimization setting.
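
For readers who want a concrete picture of the zero-th order setting, the sketch below shows a standard randomized two-point gradient estimator built from noisy function evaluations only. It is a minimal illustration under common assumptions (spherical smoothing with a fixed radius delta and a simple decaying step size); it is not the paper's higher-order smoothing-kernel construction, which is what allows the improved rates for highly smooth functions.

import numpy as np

def zeroth_order_gradient_estimate(f, x, delta=1e-2, rng=None):
    """Estimate the gradient of f at x from two noisy function evaluations.

    Standard two-point spherical-smoothing estimator (illustrative only;
    the paper exploits higher-order smoothness with a different kernel).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    # Zero-th order oracle: only the (noisy) values f(x + delta*u) and f(x - delta*u) are observed.
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def zeroth_order_gradient_descent(f, x0, steps=1000, lr=0.1, delta=1e-2, seed=0):
    """Unconstrained gradient descent driven only by noisy function evaluations."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(steps):
        g = zeroth_order_gradient_estimate(f, x, delta=delta, rng=rng)
        x -= lr / np.sqrt(t + 1) * g  # decaying step size
    return x

# Example usage: noisy evaluations of a smooth convex (logistic-style) function.
if __name__ == "__main__":
    def noisy_f(x, sigma=0.01):
        return np.log(1.0 + np.exp(x.sum())) + sigma * np.random.randn()

    x_hat = zeroth_order_gradient_descent(noisy_f, x0=np.ones(5))
    print(x_hat)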

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-bach16,
  title     = {Highly-Smooth Zero-th Order Online Optimization},
  author    = {Bach, Francis and Perchet, Vianney},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {257--283},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/bach16.pdf},
  url       = {https://proceedings.mlr.press/v49/bach16.html}
}
APA
Bach, F. & Perchet, V. (2016). Highly-Smooth Zero-th Order Online Optimization. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:257-283. Available from https://proceedings.mlr.press/v49/bach16.html.
