Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction

Jinshan Zeng, Ke Ma, Yuan Yao
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:199-207, 2018.

Abstract

There has been a recent surge of interest in nonconvex reformulations, via low-rank factorization, of stochastic convex semidefinite optimization problems, motivated by efficiency and scalability. Compared with the original convex formulations, the nonconvex ones typically involve far fewer variables, allowing them to scale to scenarios with millions of variables. However, this raises a new question: under what conditions can nonconvex stochastic algorithms find the global optima effectively, despite their empirical success in applications? In this paper, we provide an answer: a stochastic gradient descent method with variance reduction can be adapted to solve the nonconvex reformulation of the original convex problem and achieves global linear convergence, i.e., it converges to a global optimum exponentially fast given a proper initialization, in the restricted strongly convex case. Experimental studies on both simulations and real-world applications to ordinal embedding demonstrate the effectiveness of the proposed algorithms.
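
The following is a minimal, illustrative sketch (in Python/NumPy) of the idea summarized in the abstract: applying SVRG-style variance-reduced stochastic gradient descent to a low-rank factorization X = U U^T of a convex semidefinite objective. It is not the authors' exact algorithm; the least-squares objective, the sensing matrices A_i, and all hyperparameters (rank r, step size, epoch length) are hypothetical choices made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, d, r = 100, 10, 2                       # measurements, matrix dimension, factorization rank

# Synthetic rank-r ground truth X* = U* U*^T and linear measurements b_i = <A_i, X*>.
U_star = rng.standard_normal((d, r))
X_star = U_star @ U_star.T
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2         # symmetrize the sensing matrices
b = np.einsum('ijk,jk->i', A, X_star)

def grad_i(U, i):
    # Gradient of the i-th loss term (<A_i, U U^T> - b_i)^2 with respect to U.
    residual = np.sum(A[i] * (U @ U.T)) - b[i]
    return 4.0 * residual * (A[i] @ U)

def full_grad(U):
    # Full gradient of f(U) = (1/n) * sum_i (<A_i, U U^T> - b_i)^2.
    return np.mean([grad_i(U, i) for i in range(n)], axis=0)

# SVRG loop on the factor U. The paper's theory requires a proper initialization
# near a global optimum; a small random start is used here purely for illustration.
U = 0.1 * rng.standard_normal((d, r))
step, epochs, inner = 1e-4, 30, n
for epoch in range(epochs):
    U_snapshot = U.copy()
    g_snapshot = full_grad(U_snapshot)     # full gradient at the snapshot (variance anchor)
    for _ in range(inner):
        i = rng.integers(n)
        v = grad_i(U, i) - grad_i(U_snapshot, i) + g_snapshot   # variance-reduced direction
        U -= step * v

final_loss = np.mean((np.einsum('ijk,jk->i', A, U @ U.T) - b) ** 2)
print(f"final objective value: {final_loss:.4e}")

The variance-reduced direction v has expectation equal to the full gradient at U but much smaller variance near the snapshot, which is the mechanism behind the linear (geometric) convergence rate referred to in the abstract.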

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-zeng18a,
  title     = {Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction},
  author    = {Zeng, Jinshan and Ma, Ke and Yao, Yuan},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {199--207},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/zeng18a/zeng18a.pdf},
  url       = {https://proceedings.mlr.press/v84/zeng18a.html},
  abstract  = {There is a recent surge of interest in nonconvex reformulations via low-rank factorization for stochastic convex semidefinite optimization problem in the purpose of efficiency and scalability. Compared with the original convex formulations, the nonconvex ones typically involve much fewer variables, allowing them to scale to scenarios with millions of variables. However, it opens a new challenge that under what conditions the nonconvex stochastic algorithms may find the global optima effectively despite their empirical success in applications. In this paper, we provide an answer that a stochastic gradient descent method with variance reduction, can be adapted to solve the nonconvex reformulation of the original convex problem, with a global linear convergence, i.e., converging to a global optimum exponentially fast, at a proper initial choice in the restricted strongly convex case. Experimental studies on both simulation and real-world applications on ordinal embedding are provided to show the effectiveness of the proposed algorithms.}
}
Endnote
%0 Conference Paper
%T Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction
%A Jinshan Zeng
%A Ke Ma
%A Yuan Yao
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-zeng18a
%I PMLR
%P 199--207
%U https://proceedings.mlr.press/v84/zeng18a.html
%V 84
%X There is a recent surge of interest in nonconvex reformulations via low-rank factorization for stochastic convex semidefinite optimization problem in the purpose of efficiency and scalability. Compared with the original convex formulations, the nonconvex ones typically involve much fewer variables, allowing them to scale to scenarios with millions of variables. However, it opens a new challenge that under what conditions the nonconvex stochastic algorithms may find the global optima effectively despite their empirical success in applications. In this paper, we provide an answer that a stochastic gradient descent method with variance reduction, can be adapted to solve the nonconvex reformulation of the original convex problem, with a global linear convergence, i.e., converging to a global optimum exponentially fast, at a proper initial choice in the restricted strongly convex case. Experimental studies on both simulation and real-world applications on ordinal embedding are provided to show the effectiveness of the proposed algorithms.
APA
Zeng, J., Ma, K., & Yao, Y. (2018). Finding Global Optima in Nonconvex Stochastic Semidefinite Optimization with Variance Reduction. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:199-207. Available from https://proceedings.mlr.press/v84/zeng18a.html.
