Bounding the expected run-time of nonconvex optimization with early stopping

Thomas Flynn, Kwangmin Yu, Abid Malik, Nicholas D’Imperio, Shinjae Yoo
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:51-60, 2020.

Abstract

This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function. The form of early stopping we consider is that optimization terminates when the norm of the gradient of a validation function falls below a threshold. We derive conditions that guarantee this stopping rule is well-defined, and provide bounds on the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach in the general setting of a first-order optimization algorithm, with possibly biased update directions subject to a geometric drift condition. We then derive bounds on the expected running time for early stopping variants of several algorithms, including stochastic gradient descent (SGD), decentralized SGD (DSGD), and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.
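The stopping rule described in the abstract — terminate when the norm of the gradient of a validation function falls below a threshold — can be illustrated with a minimal sketch. This is not the paper's exact algorithm or analysis; the names (`f_grad_sample`, `v_grad`) and parameter values are illustrative assumptions, with a plain SGD update standing in for the general first-order method studied in the paper.

```python
import numpy as np

def sgd_early_stop(x0, f_grad_sample, v_grad, lr=0.1, eps=1e-3,
                   max_iter=10000, rng=None):
    """Run SGD on a training objective, stopping early once the
    gradient of a separate validation function is small.

    f_grad_sample(x, rng): a noisy gradient of the training loss.
    v_grad(x): the (exact) gradient of the validation function.
    Returns the final iterate and the number of steps taken.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for t in range(max_iter):
        # Early-stopping criterion: small validation gradient norm.
        if np.linalg.norm(v_grad(x)) < eps:
            return x, t
        # Standard stochastic gradient step on the training loss.
        x = x - lr * f_grad_sample(x, rng)
    return x, max_iter
```

For example, with training gradient `x + noise` and validation gradient `x` (training and validation objectives close in the sense the paper quantifies via Wasserstein distance), the loop terminates after a modest number of steps rather than running to `max_iter`.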

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-flynn20a,
  title     = {Bounding the expected run-time of nonconvex optimization with early stopping},
  author    = {Flynn, Thomas and Yu, Kwangmin and Malik, Abid and D'Imperio, Nicholas and Yoo, Shinjae},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {51--60},
  year      = {2020},
  editor    = {Jonas Peters and David Sontag},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/flynn20a/flynn20a.pdf},
  url       = {http://proceedings.mlr.press/v124/flynn20a.html},
  abstract  = {This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function. The form of early stopping we consider is that optimization terminates when the norm of the gradient of a validation function falls below a threshold. We derive conditions that guarantee this stopping rule is well-defined, and provide bounds on the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach in the general setting of a first-order optimization algorithm, with possibly biased update directions subject to a geometric drift condition. We then derive bounds on the expected running time for early stopping variants of several algorithms, including stochastic gradient descent (SGD), decentralized SGD (DSGD), and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.}
}
Endnote
%0 Conference Paper
%T Bounding the expected run-time of nonconvex optimization with early stopping
%A Thomas Flynn
%A Kwangmin Yu
%A Abid Malik
%A Nicholas D’Imperio
%A Shinjae Yoo
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-flynn20a
%I PMLR
%P 51--60
%U http://proceedings.mlr.press/v124/flynn20a.html
%V 124
%X This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function. The form of early stopping we consider is that optimization terminates when the norm of the gradient of a validation function falls below a threshold. We derive conditions that guarantee this stopping rule is well-defined, and provide bounds on the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach in the general setting of a first-order optimization algorithm, with possibly biased update directions subject to a geometric drift condition. We then derive bounds on the expected running time for early stopping variants of several algorithms, including stochastic gradient descent (SGD), decentralized SGD (DSGD), and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.
APA
Flynn, T., Yu, K., Malik, A., D’Imperio, N., & Yoo, S. (2020). Bounding the expected run-time of nonconvex optimization with early stopping. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:51-60. Available from http://proceedings.mlr.press/v124/flynn20a.html.