High probability guarantees for stochastic convex optimization

Damek Davis, Dmitriy Drusvyatskiy
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1411-1427, 2020.

Abstract

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on “light-tail” noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.
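
To make the two ingredients named in the abstract concrete, here is a minimal, hedged Python sketch of the robust distance estimation idea only (select, among several independent runs of a base stochastic method, a point that is close to most of the others); it is not the paper's proxBoost procedure, and the helper names (`sgd_trial`, `robust_select`) and the toy quadratic objective are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: illustrates the "robust distance estimation" ingredient the
# abstract mentions -- NOT the paper's actual proxBoost procedure.
# Idea: run a cheap stochastic method several times independently, then keep
# the trial whose iterate is close to a majority of the other trials.
# All names (sgd_trial, robust_select) and the toy strongly convex quadratic
# are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sgd_trial(n_steps, dim, noise_scale, lr=0.1):
    """One independent SGD run on the toy objective f(x) = 0.5 * ||x||^2
    with additive gradient noise; returns the final iterate."""
    x = rng.normal(size=dim)
    for t in range(1, n_steps + 1):
        grad = x + noise_scale * rng.standard_normal(dim)  # noisy gradient of f
        x = x - (lr / np.sqrt(t)) * grad
    return x

def robust_select(candidates):
    """Return the candidate whose median distance to the other candidates is
    smallest: if most candidates land near the minimizer (which each does with
    constant probability, by Markov's inequality on the in-expectation bound),
    the selected point is near the minimizer too, up to a constant factor."""
    D = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1)
    scores = np.median(D, axis=1)
    return candidates[int(np.argmin(scores))]

# Boosting an in-expectation guarantee to a high-confidence one: the number of
# independent trials m only needs to grow logarithmically in 1/delta.
m, dim = 11, 50
candidates = np.stack(
    [sgd_trial(n_steps=200, dim=dim, noise_scale=5.0) for _ in range(m)]
)
x_hat = robust_select(candidates)
print("single-trial distance to optimum:", np.linalg.norm(candidates[0]))
print("robustly selected distance:      ", np.linalg.norm(x_hat))
```

In the paper's setting this kind of selection step is combined with a proximal point outer loop to keep the overhead polylogarithmic in the condition number; the sketch above shows only the confidence-boosting selection on a single problem.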

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-davis20a,
  title     = {High probability guarantees for stochastic convex optimization},
  author    = {Davis, Damek and Drusvyatskiy, Dmitriy},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1411--1427},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/davis20a/davis20a.pdf},
  url       = {https://proceedings.mlr.press/v125/davis20a.html}
}
APA
Davis, D. & Drusvyatskiy, D. (2020). High probability guarantees for stochastic convex optimization. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:1411-1427. Available from https://proceedings.mlr.press/v125/davis20a.html.