Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods

Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):604-612, 2014.

Abstract

We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We maintain computational tractability and limit memory requirements even for high dimensional optimization problems by storing and manipulating these quadratic approximations in a shared, time evolving, low dimensional subspace. This algorithm contrasts with earlier stochastic second order techniques that treat the Hessian of each contributing function as a noisy approximation to the full Hessian, rather than as a target for direct estimation. Each update step requires only a single contributing function or minibatch evaluation (as in SGD), and each step is scaled using an approximate inverse Hessian and little to no adjustment of hyperparameters is required (as is typical for quasi-Newton methods). We experimentally demonstrate improved convergence on seven diverse optimization problems. The algorithm is released as open source Python and MATLAB packages.
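A minimal sketch of the idea described above, in Python/NumPy. This is not the released SFO package, and the names used here (sfo_like_minimize, f_df_list, damping) are purely illustrative: each contributing subfunction keeps its own quadratic model (an anchor point, the gradient at that anchor, and a BFGS-updated Hessian approximation), a single subfunction is evaluated per step to refresh its model, and the parameters then move to the minimizer of the summed models. Unlike the published algorithm, the sketch stores full D x D Hessian approximations rather than working in a shared, time-evolving low-dimensional subspace, so it is only practical for small D.

import numpy as np

def sfo_like_minimize(f_df_list, theta0, num_passes=20, damping=1e-5):
    """Toy illustration of per-subfunction quadratic models (not the SFO package).

    f_df_list : list of callables; f_df_list[i](theta) returns (f_i(theta), grad_i(theta)).
    theta0    : initial parameter vector, a 1-D numpy array of length D.
    """
    D, N = theta0.size, len(f_df_list)
    theta = theta0.copy()

    # Per-subfunction quadratic model: anchor point, gradient at the anchor,
    # and a Hessian approximation. Unvisited subfunctions contribute an
    # identity-curvature model anchored at theta0 (a mild pull toward the start).
    anchors  = [theta0.copy() for _ in range(N)]
    grads    = [np.zeros(D)   for _ in range(N)]
    hessians = [np.eye(D)     for _ in range(N)]
    visited  = [False] * N

    for _ in range(num_passes * N):
        j = np.random.randint(N)          # one subfunction per step, as in SGD
        _, g_j = f_df_list[j](theta)

        if visited[j]:
            # BFGS secant update of this subfunction's Hessian approximation.
            s, y = theta - anchors[j], g_j - grads[j]
            sy = s @ y
            if sy > 1e-10:
                B, Bs = hessians[j], hessians[j] @ s
                hessians[j] = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
        visited[j] = True

        # Refresh the stored linearization point for subfunction j.
        anchors[j], grads[j] = theta.copy(), g_j.copy()

        # Newton step on the sum of all quadratic models: solve
        #   sum_i [ grads[i] + hessians[i] (theta - anchors[i]) ] = 0
        H_total = sum(hessians) + damping * np.eye(D)
        rhs = sum(H @ x - g for H, x, g in zip(hessians, anchors, grads))
        theta = np.linalg.solve(H_total, rhs)

    return theta

Each entry of f_df_list would typically be a closure returning the loss and gradient of one minibatch; the released Python and MATLAB packages implement the full subspace-based algorithm with additional safeguards.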

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-sohl-dicksteinb14,
  title     = {Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods},
  author    = {Sohl-Dickstein, Jascha and Poole, Ben and Ganguli, Surya},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {604--612},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/sohl-dicksteinb14.pdf},
  url       = {https://proceedings.mlr.press/v32/sohl-dicksteinb14.html}
}
APA
Sohl-Dickstein, J., Poole, B. & Ganguli, S. (2014). Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):604-612. Available from https://proceedings.mlr.press/v32/sohl-dicksteinb14.html.