A Composite Randomized Incremental Gradient Method

Junyu Zhang, Lin Xiao
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7454-7462, 2019.

Abstract

We consider the problem of minimizing the composition of a smooth function (which can be nonconvex) and a smooth vector mapping, where both of them can be expressed as the average of a large number of components. We propose a composite randomized incremental gradient method by extending the SAGA framework. The gradient sample complexity of our method matches that of several recently developed methods based on SVRG in the general case. However, for structured problems where linear convergence rates can be obtained, our method can be much better for ill-conditioned problems. In addition, when the finite-sum structure appears only in the inner mapping, the sample complexity of our method is the same as that of SAGA for minimizing a finite sum of smooth nonconvex functions, despite the additional outer composition and the stochastic composite gradients being biased in our case.
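
For reference, the problem class described in the abstract can be written as follows; the symbols m, n, f_j, and g_i are notational assumptions introduced here for illustration, not quoted from the paper:

\[
\min_{x \in \mathbb{R}^d} \; f\bigl(g(x)\bigr), \qquad
f(y) = \frac{1}{m}\sum_{j=1}^{m} f_j(y), \qquad
g(x) = \frac{1}{n}\sum_{i=1}^{n} g_i(x),
\]

where each f_j is smooth (possibly nonconvex) and each g_i is a smooth vector mapping.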
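To make the SAGA-style construction concrete, below is a minimal, illustrative sketch (Python with NumPy) of one plausible variance-reduced step for the case where only the inner mapping has finite-sum structure. The particular choices of f, g_i, step size, and toy data are assumptions for illustration; this is a sketch in the spirit of the abstract, not the paper's exact algorithm.

import numpy as np

# Sketch of a SAGA-style step for  minimize_x  f( (1/n) * sum_i g_i(x) ),
# i.e. the case where only the inner mapping has finite-sum structure.
# Toy instance (an assumption for illustration): g_i(x) = A_i x and
# f(y) = 0.5 * ||y - b||^2, so the component Jacobians are constant.

rng = np.random.default_rng(0)
n, d, p = 20, 5, 3                       # components, dim of x, dim of g(x)
A = rng.standard_normal((p, d)) + 0.2 * rng.standard_normal((n, p, d))
b = rng.standard_normal(p)

def g(i, x):      return A[i] @ x        # inner component value
def g_jac(i, x):  return A[i]            # its Jacobian (constant in this toy)
def f_grad(y):    return y - b           # gradient of the outer function

x = np.zeros(d)
eta = 0.01                               # heuristic step size for this toy

# SAGA-style tables: stored value/Jacobian of each g_i and their running averages.
val_tab = np.stack([g(i, x) for i in range(n)])
jac_tab = np.stack([g_jac(i, x) for i in range(n)])
val_avg = val_tab.mean(axis=0)
jac_avg = jac_tab.mean(axis=0)

for k in range(5000):
    i = rng.integers(n)
    new_val, new_jac = g(i, x), g_jac(i, x)

    # Variance-reduced estimates of g(x) and its Jacobian; composing them with
    # f' gives a (generally biased) estimate of the composite gradient.
    g_hat = new_val - val_tab[i] + val_avg
    J_hat = new_jac - jac_tab[i] + jac_avg
    grad_est = J_hat.T @ f_grad(g_hat)

    x -= eta * grad_est

    # Refresh the stored component and the running averages.
    val_avg += (new_val - val_tab[i]) / n
    jac_avg += (new_jac - jac_tab[i]) / n
    val_tab[i], jac_tab[i] = new_val, new_jac

print("final objective:", 0.5 * np.linalg.norm(A.mean(axis=0) @ x - b) ** 2)

The key point the sketch illustrates is that two SAGA-style tables are maintained, one for inner function values and one for Jacobians, and the composite gradient estimate is formed by chaining the two variance-reduced estimates through the outer gradient.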

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhang19n, title = {A Composite Randomized Incremental Gradient Method}, author = {Zhang, Junyu and Xiao, Lin}, booktitle = {Proceedings of the 36th International Conference on Machine Learning}, pages = {7454--7462}, year = {2019}, editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan}, volume = {97}, series = {Proceedings of Machine Learning Research}, month = {09--15 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v97/zhang19n/zhang19n.pdf}, url = {https://proceedings.mlr.press/v97/zhang19n.html}, abstract = {We consider the problem of minimizing the composition of a smooth function (which can be nonconvex) and a smooth vector mapping, where both of them can be expressed as the average of a large number of components. We propose a composite randomized incremental gradient method by extending the SAGA framework. The gradient sample complexity of our method matches that of several recently developed methods based on SVRG in the general case. However, for structured problems where linear convergence rates can be obtained, our method can be much better for ill-conditioned problems. In addition, when the finite-sum structure appears only in the inner mapping, the sample complexity of our method is the same as that of SAGA for minimizing a finite sum of smooth nonconvex functions, despite the additional outer composition and the stochastic composite gradients being biased in our case.} }
Endnote
%0 Conference Paper %T A Composite Randomized Incremental Gradient Method %A Junyu Zhang %A Lin Xiao %B Proceedings of the 36th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2019 %E Kamalika Chaudhuri %E Ruslan Salakhutdinov %F pmlr-v97-zhang19n %I PMLR %P 7454--7462 %U https://proceedings.mlr.press/v97/zhang19n.html %V 97 %X We consider the problem of minimizing the composition of a smooth function (which can be nonconvex) and a smooth vector mapping, where both of them can be expressed as the average of a large number of components. We propose a composite randomized incremental gradient method by extending the SAGA framework. The gradient sample complexity of our method matches that of several recently developed methods based on SVRG in the general case. However, for structured problems where linear convergence rates can be obtained, our method can be much better for ill-conditioned problems. In addition, when the finite-sum structure appears only in the inner mapping, the sample complexity of our method is the same as that of SAGA for minimizing a finite sum of smooth nonconvex functions, despite the additional outer composition and the stochastic composite gradients being biased in our case.
APA
Zhang, J. & Xiao, L. (2019). A Composite Randomized Incremental Gradient Method. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7454-7462. Available from https://proceedings.mlr.press/v97/zhang19n.html.
