A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums

Anton Rodomanov, Dmitry Kropotov;
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2597-2605, 2016.

Abstract

We consider the problem of minimizing a strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear: the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and a linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
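The core idea described above — keep a quadratic model of each component anchored at its own point, and refresh one component per iteration — can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code: the function names (`nim`, `grad_i`, `hess_i`), the cyclic component order, the unit step, and the example data (l2-regularized logistic regression) are all illustrative assumptions, and the globalization strategy from the paper is omitted.

```python
import numpy as np

def nim(grad_i, hess_i, x0, n, iters):
    """Sketch of a Newton-type incremental method (NIM)-style loop.

    Each component f_i is represented by a quadratic model built at an
    anchor point z_i; one component's model is rebuilt per iteration,
    and the iterate minimizes the aggregated quadratic model.
    """
    z = [x0.copy() for _ in range(n)]            # per-component anchor points
    g = [grad_i(i, x0) for i in range(n)]        # stored gradients at anchors
    H = [hess_i(i, x0) for i in range(n)]        # stored Hessians at anchors
    B = sum(H)                                   # aggregated model Hessian
    u = sum(H[i] @ z[i] - g[i] for i in range(n))  # aggregated linear part
    x = np.linalg.solve(B, u)                    # minimizer of the model
    for t in range(iters):
        i = t % n                                # cyclic component choice (assumption)
        # remove component i's old contribution from the aggregates
        B -= H[i]
        u -= H[i] @ z[i] - g[i]
        # rebuild its quadratic model at the current iterate
        z[i] = x.copy()
        g[i] = grad_i(i, x)
        H[i] = hess_i(i, x)
        B += H[i]
        u += H[i] @ z[i] - g[i]
        # minimize the updated aggregated model
        x = np.linalg.solve(B, u)
    return x

# --- usage on a small synthetic problem (illustrative data, not from the paper) ---
rng = np.random.default_rng(0)
n, d, mu = 20, 3, 0.1
A = rng.standard_normal((n, d))
b = np.sign(rng.standard_normal(n))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# f_i(x) = log(1 + exp(-b_i a_i^T x)) + (mu/2) ||x||^2, so the sum is strongly convex
def grad_i(i, x):
    return -b[i] * A[i] * sigmoid(-b[i] * (A[i] @ x)) + mu * x

def hess_i(i, x):
    s = sigmoid(b[i] * (A[i] @ x))
    return s * (1.0 - s) * np.outer(A[i], A[i]) + mu * np.eye(d)

x_nim = nim(grad_i, hess_i, np.zeros(d), n, iters=200)
grad_norm = np.linalg.norm(sum(grad_i(i, x_nim) for i in range(n)))
```

Note the design choice that makes the method incremental: the aggregates `B` and `u` are updated by subtracting one component's old contribution and adding its new one, so each iteration touches a single `f_i` rather than the full sum, while the solve still uses second-order information from all components.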
