Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values

Chaoxu Zhou, Wenbo Gao, Donald Goldfarb
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:4150-4159, 2017.

Abstract

We propose a novel class of stochastic, adaptive methods for minimizing self-concordant functions which can be expressed as an expected value. These methods generate an estimate of the true objective function by taking the empirical mean over a sample drawn at each step, making the problem tractable. The use of adaptive step sizes eliminates the need for the user to supply a step size. Methods in this class include extensions of gradient descent (GD) and BFGS. We show that, given a suitable amount of sampling, the stochastic adaptive GD attains linear convergence in expectation, and with further sampling, the stochastic adaptive BFGS attains R-superlinear convergence. We present experiments showing that these methods compare favorably to SGD.
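To make the sampling idea in the abstract concrete, below is a minimal sketch of one step of a stochastic gradient method in which the objective and its gradient are estimated by an empirical mean over a sample drawn fresh at each iteration, and the step size is set adaptively from the sampled curvature rather than supplied by the user. The least-squares objective, the batch size, and the damped-Newton-style step-size rule 1/(1 + delta) are illustrative assumptions, not the paper's exact algorithm or step-size formula.

import numpy as np

def stochastic_adaptive_gd_step(x, A, b, batch_size, rng):
    """One illustrative stochastic adaptive gradient step.

    The objective f(x) = E[(a^T x - b)^2 / 2] is estimated by the empirical
    mean over a freshly drawn sample.  The step size 1 / (1 + delta), where
    delta measures the sampled curvature along the step, is a damped-Newton-
    style stand-in for the paper's adaptive rule, not the rule itself.
    """
    idx = rng.choice(len(b), size=batch_size, replace=False)
    A_s, b_s = A[idx], b[idx]                        # sample drawn this step
    g = A_s.T @ (A_s @ x - b_s) / batch_size         # sampled gradient (empirical mean)
    d = -g                                           # gradient-descent direction
    delta = np.sqrt(d @ (A_s.T @ (A_s @ d)) / batch_size)  # curvature along d
    t = 1.0 / (1.0 + delta)                          # adaptive step size; no user tuning
    return x + t * d

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(1000)
x = np.zeros(5)
for _ in range(200):
    x = stochastic_adaptive_gd_step(x, A, b, batch_size=64, rng=rng)

In the BFGS extension mentioned in the abstract, the direction -g would instead be premultiplied by a quasi-Newton approximation of the inverse Hessian built from sampled gradient information; the sketch above only covers the gradient-descent member of the class.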

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-zhou17a,
  title     = {Stochastic Adaptive Quasi-{N}ewton Methods for Minimizing Expected Values},
  author    = {Chaoxu Zhou and Wenbo Gao and Donald Goldfarb},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {4150--4159},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/zhou17a/zhou17a.pdf},
  url       = {https://proceedings.mlr.press/v70/zhou17a.html}
}
Endnote
%0 Conference Paper
%T Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values
%A Chaoxu Zhou
%A Wenbo Gao
%A Donald Goldfarb
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-zhou17a
%I PMLR
%P 4150--4159
%U https://proceedings.mlr.press/v70/zhou17a.html
%V 70
APA
Zhou, C., Gao, W. & Goldfarb, D. (2017). Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:4150-4159. Available from https://proceedings.mlr.press/v70/zhou17a.html.