Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization

Sudeep Salgia, Qing Zhao, Sattar Vakili
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8427-8437, 2020.

Abstract

A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization. Given that exact coordinate minimization is impossible due to the unknown stochastic nature of the objective function, the crux of the proposed optimization algorithm is an optimal control of the minimization precision in each iteration. We establish the optimal precision control and the resulting order-optimal regret performance for strongly convex and separably nonsmooth functions. An interesting finding is that the optimal progression of precision across iterations is independent of the low-dimensional CM routine employed, suggesting a general framework for extending low-dimensional optimization routines to high-dimensional problems. The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization. Requiring only a sublinear order of message exchanges, it also lends itself well to distributed computing as compared with the alternative approach of coordinate gradient descent.
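To make the framework concrete, below is a minimal Python sketch of the idea the abstract describes: cycle through coordinates, and in iteration t run an inexact one-dimensional stochastic minimization only to precision eps_t, tightening the precision as iterations progress. The noisy trisection subroutine, the geometric schedule eps_t = 2^{-t}, the cyclic coordinate rule, and the toy objective are all illustrative assumptions; the paper's optimal precision control and regret analysis are not reproduced here.

import numpy as np

def noisy_f(x, rng):
    # Toy strongly convex objective observed through zero-mean noise.
    return 0.5 * np.sum(x ** 2) + rng.normal(scale=0.1)

def stochastic_1d_min(f, x, i, lo, hi, eps, rng, reps=20):
    # Approximately minimize f along coordinate i to precision eps using
    # noisy trisection; averaging repeated evaluations damps the noise.
    # (An assumed routine: any low-dimensional stochastic solver fits here.)
    while hi - lo > eps:
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        x1, x2 = x.copy(), x.copy()
        x1[i], x2[i] = m1, m2
        f1 = np.mean([f(x1, rng) for _ in range(reps)])
        f2 = np.mean([f(x2, rng) for _ in range(reps)])
        if f1 < f2:
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def progressive_precision_cm(f, d, n_iters, rng):
    x = np.ones(d)
    for t in range(n_iters):
        eps_t = 2.0 ** (-t)   # assumed geometric precision schedule
        i = t % d             # cyclic coordinate selection
        x[i] = stochastic_1d_min(f, x, i, -2.0, 2.0, eps_t, rng)
    return x

rng = np.random.default_rng(0)
print(progressive_precision_cm(noisy_f, d=5, n_iters=15, rng=rng))
# The iterate should approach the minimizer (the origin for this toy f).

The abstract's key observation is reflected in the structure: the precision schedule eps_t is set independently of which low-dimensional routine plays the role of stochastic_1d_min, so any such solver can be slotted in.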

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-salgia20a,
  title     = {Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization},
  author    = {Salgia, Sudeep and Zhao, Qing and Vakili, Sattar},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8427--8437},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/salgia20a/salgia20a.pdf},
  url       = {https://proceedings.mlr.press/v119/salgia20a.html},
  abstract  = {A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization. Given that exact coordinate minimization is impossible due to the unknown stochastic nature of the objective function, the crux of the proposed optimization algorithm is an optimal control of the minimization precision in each iteration. We establish the optimal precision control and the resulting order-optimal regret performance for strongly convex and separably nonsmooth functions. An interesting finding is that the optimal progression of precision across iterations is independent of the low-dimensional CM routine employed, suggesting a general framework for extending low-dimensional optimization routines to high-dimensional problems. The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization. Requiring only a sublinear order of message exchanges, it also lends itself well to distributed computing as compared with the alternative approach of coordinate gradient descent.}
}
Endnote
%0 Conference Paper
%T Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization
%A Sudeep Salgia
%A Qing Zhao
%A Sattar Vakili
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-salgia20a
%I PMLR
%P 8427--8437
%U https://proceedings.mlr.press/v119/salgia20a.html
%V 119
%X A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization. Given that exact coordinate minimization is impossible due to the unknown stochastic nature of the objective function, the crux of the proposed optimization algorithm is an optimal control of the minimization precision in each iteration. We establish the optimal precision control and the resulting order-optimal regret performance for strongly convex and separably nonsmooth functions. An interesting finding is that the optimal progression of precision across iterations is independent of the low-dimensional CM routine employed, suggesting a general framework for extending low-dimensional optimization routines to high-dimensional problems. The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization. Requiring only a sublinear order of message exchanges, it also lends itself well to distributed computing as compared with the alternative approach of coordinate gradient descent.
APA
Salgia, S., Zhao, Q., & Vakili, S. (2020). Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8427-8437. Available from https://proceedings.mlr.press/v119/salgia20a.html.