The Sample Complexity of Optimizing a Convex Function

Eric Balkanski, Yaron Singer
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:275-301, 2017.

Abstract

In this paper we study optimization from samples of convex functions. There are many scenarios in which we do not know the function we wish to optimize but can learn it from data. In such cases, we are interested in bounding the number of samples required to optimize the function. Our main result shows that in general, the number of samples required to obtain a non-trivial approximation to the optimum of a convex function is exponential in its dimension, even when the function is PAC-learnable. We also obtain strong lower bounds for strongly convex and Lipschitz continuous functions. On the positive side, we show that there are interesting classes of functions and distributions for which the sample complexity is polynomial in the dimension of the function.
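To make the setting concrete, below is a minimal sketch of optimization from samples: an algorithm observes pairs (x_i, f(x_i)) with the points x_i drawn from a distribution, and must output a point whose value approximates the minimum of f. The quadratic objective and least-squares surrogate here are illustrative assumptions, not constructions from the paper; the paper's main result shows precisely that no sample-based strategy of this kind can guarantee a non-trivial approximation for general convex functions without exponentially many samples.

# A hypothetical sketch of optimization from samples (not the paper's method):
# observe (x_i, f(x_i)), fit a quadratic surrogate, and minimize the surrogate.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # dimension
x_star = rng.normal(size=d)             # hidden minimizer (illustrative)

def f(x):
    # Illustrative convex objective: f(x) = ||x - x_star||^2.
    return float(np.sum((x - x_star) ** 2))

# Draw m samples x_i from a distribution D (here standard Gaussian) and
# record the observed values f(x_i).
m = 200
X = rng.normal(size=(m, d))
y = np.array([f(x) for x in X])

# Fit g(x) = x^T A x + b^T x + c by least squares over the monomial
# features (x_j * x_k, x_j, 1), then minimize g at its stationary point.
features = np.hstack([(X[:, :, None] * X[:, None, :]).reshape(m, -1),
                      X, np.ones((m, 1))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
A = coef[:d * d].reshape(d, d)
A = (A + A.T) / 2                       # symmetrize the quadratic term
b = coef[d * d:d * d + d]
x_hat = np.linalg.solve(2 * A, -b)      # solves grad g(x) = 2 A x + b = 0

print("f(x_hat) =", f(x_hat), "  min f =", f(x_star))

Here the surrogate recovers the minimizer only because the true function lies in the simple class being fit; the lower bounds in the paper exhibit convex functions that are PAC-learnable from the same samples yet cannot be approximately minimized by any algorithm, surrogate-based or otherwise.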

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-balkanski17a,
  title     = {The Sample Complexity of Optimizing a Convex Function},
  author    = {Balkanski, Eric and Singer, Yaron},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {275--301},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/balkanski17a/balkanski17a.pdf},
  url       = {https://proceedings.mlr.press/v65/balkanski17a.html},
  abstract  = {In this paper we study optimization from samples of convex functions. There are many scenarios in which we do not know the function we wish to optimize but can learn it from data. In such cases, we are interested in bounding the number of samples required to optimize the function. Our main result shows that in general, the number of samples required to obtain a non-trivial approximation to the optimum of a convex function is exponential in its dimension, even when the function is PAC-learnable. We also obtain strong lower bounds for strongly convex and Lipschitz continuous functions. On the positive side, we show that there are interesting classes of functions and distributions for which the sample complexity is polynomial in the dimension of the function.}
}
Endnote
%0 Conference Paper
%T The Sample Complexity of Optimizing a Convex Function
%A Eric Balkanski
%A Yaron Singer
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-balkanski17a
%I PMLR
%P 275--301
%U https://proceedings.mlr.press/v65/balkanski17a.html
%V 65
%X In this paper we study optimization from samples of convex functions. There are many scenarios in which we do not know the function we wish to optimize but can learn it from data. In such cases, we are interested in bounding the number of samples required to optimize the function. Our main result shows that in general, the number of samples required to obtain a non-trivial approximation to the optimum of a convex function is exponential in its dimension, even when the function is PAC-learnable. We also obtain strong lower bounds for strongly convex and Lipschitz continuous functions. On the positive side, we show that there are interesting classes of functions and distributions for which the sample complexity is polynomial in the dimension of the function.
APA
Balkanski, E. & Singer, Y. (2017). The Sample Complexity of Optimizing a Convex Function. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:275-301. Available from https://proceedings.mlr.press/v65/balkanski17a.html.
