A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums

Anton Rodomanov, Dmitry Kropotov
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2597-2605, 2016.

Abstract

We consider the problem of minimizing the strongly convex sum of a finite number of convex functions. Standard incremental/stochastic algorithms for this problem have at most a linear convergence rate. We propose a new incremental method with a superlinear convergence rate: the Newton-type incremental method (NIM). The idea of the method is to maintain a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and a linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
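The model-update idea admits a compact illustration. The following is a minimal Python sketch of one reading of the abstract's scheme for the smooth, unregularized case: keep a second-order model of each component f_i centered at a stored point z_i, minimize the aggregated quadratic model exactly, and refresh a single component per iteration. The interface (grads[i], hessians[i]), the cyclic component order, and the plain Newton solve are assumptions for illustration only; the paper's actual method is proximal and handles a composite objective.

```python
import numpy as np

def nim_sketch(grads, hessians, x0, n_iters=100):
    """Illustrative sketch (not the paper's specification) of a
    Newton-type incremental method. grads[i](x) and hessians[i](x)
    are assumed to return the gradient and Hessian of the convex
    component f_i at x; the sum of the f_i is assumed strongly
    convex, so the aggregated Hessian is invertible."""
    n, x = len(grads), x0.copy()
    # Per-component model centers, gradients, and Hessians.
    z = [x0.copy() for _ in range(n)]
    g = [grads[i](x0) for i in range(n)]
    H = [hessians[i](x0) for i in range(n)]
    # Aggregates of the quadratic model
    #   m(x) = sum_i [f_i(z_i) + g_i^T (x - z_i)
    #                 + 1/2 (x - z_i)^T H_i (x - z_i)],
    # whose minimizer solves H_sum x = u_sum.
    H_sum = sum(H)
    u_sum = sum(Hi @ zi - gi for Hi, zi, gi in zip(H, z, g))
    for k in range(n_iters):
        # Minimize the current model of the objective.
        x = np.linalg.solve(H_sum, u_sum)
        # Update a single component of the model (cyclic order).
        i = k % n
        H_sum -= H[i]
        u_sum -= H[i] @ z[i] - g[i]
        z[i], g[i], H[i] = x.copy(), grads[i](x), hessians[i](x)
        H_sum += H[i]
        u_sum += H[i] @ z[i] - g[i]
    return x
```

Because only one component changes per iteration, the aggregates H_sum and u_sum can be maintained incrementally, so each step costs one component gradient/Hessian evaluation plus a d-dimensional linear solve. This is cheap in exactly the regime the experiments target: a large number of functions n and a small number of variables d.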

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-rodomanov16,
  title     = {A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums},
  author    = {Rodomanov, Anton and Kropotov, Dmitry},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2597--2605},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/rodomanov16.pdf},
  url       = {https://proceedings.mlr.press/v48/rodomanov16.html},
  abstract  = {We consider the problem of minimizing the strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear – the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and further update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.}
}
Endnote
%0 Conference Paper
%T A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums
%A Anton Rodomanov
%A Dmitry Kropotov
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-rodomanov16
%I PMLR
%P 2597--2605
%U https://proceedings.mlr.press/v48/rodomanov16.html
%V 48
%X We consider the problem of minimizing the strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear – the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and further update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
RIS
TY  - CPAPER
TI  - A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums
AU  - Anton Rodomanov
AU  - Dmitry Kropotov
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-rodomanov16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 2597
EP  - 2605
L1  - http://proceedings.mlr.press/v48/rodomanov16.pdf
UR  - https://proceedings.mlr.press/v48/rodomanov16.html
AB  - We consider the problem of minimizing the strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear – the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and further update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
ER  -
APA
Rodomanov, A., & Kropotov, D. (2016). A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2597-2605. Available from https://proceedings.mlr.press/v48/rodomanov16.html.
