On the Statistical Limits of Convex Relaxations

Zhaoran Wang, Quanquan Gu, Han Liu
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1368-1377, 2016.

Abstract

Many high-dimensional sparse learning problems are formulated as nonconvex optimization problems. A popular approach to solving them is through convex relaxations such as linear and semidefinite programming. In this paper, we study the statistical limits of convex relaxations. In particular, we consider two problems: mean estimation for a sparse principal submatrix and edge probability estimation for the stochastic block model. We exploit the sum-of-squares relaxation hierarchy to sharply characterize the limits of a broad class of convex relaxations. Our results show that statistical optimality must be compromised to achieve computational tractability via convex relaxations. In contrast to existing computational lower bounds for statistical problems, which apply to general polynomial-time algorithms but rely on unproven computational hardness hypotheses for problems such as planted clique detection, our theory focuses on a broad class of convex relaxations and does not rely on such hypotheses.
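
For intuition, here is a minimal, hypothetical sketch (not the paper's construction) of the kind of convex relaxation the abstract refers to: the nonconvex problem of locating an elevated-mean principal submatrix is lifted to a semidefinite program, following the well-known l1-constrained SDP relaxation of sparse PCA. The dimensions, signal strength, and solver defaults below are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Illustrative sketch only: all sizes and signal strengths are
# hypothetical, not taken from the paper.
d, k = 30, 5
rng = np.random.default_rng(0)
A = rng.normal(size=(d, d))
A = (A + A.T) / 2.0                      # symmetric noise matrix
support = rng.choice(d, size=k, replace=False)
A[np.ix_(support, support)] += 1.0       # plant an elevated-mean block

# Nonconvex target: max_u <A, u u^T> over k-sparse unit vectors u.
# Relaxation: lift u u^T to a PSD matrix X with trace 1, and replace
# the sparsity constraint by the elementwise l1 bound sum |X_ij| <= k
# (valid since ||u||_1^2 <= k for a k-sparse unit vector u).
X = cp.Variable((d, d), PSD=True)
constraints = [cp.trace(X) == 1, cp.sum(cp.abs(X)) <= k]
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
prob.solve()

# Read off a candidate support from the leading eigenvector of X.
eigvecs = np.linalg.eigh(X.value)[1]
estimated = np.argsort(np.abs(eigvecs[:, -1]))[-k:]
print(sorted(support.tolist()), sorted(estimated.tolist()))
```

The paper's lower bounds concern exactly this kind of tractable relaxation: the l1 bound is a convex surrogate for sparsity, and tightening it along the sum-of-squares hierarchy trades computation for statistical accuracy.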

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-wangc16,
  title     = {On the Statistical Limits of Convex Relaxations},
  author    = {Wang, Zhaoran and Gu, Quanquan and Liu, Han},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1368--1377},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/wangc16.pdf},
  url       = {https://proceedings.mlr.press/v48/wangc16.html}
}
APA
Wang, Z., Gu, Q., & Liu, H. (2016). On the Statistical Limits of Convex Relaxations. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1368-1377. Available from https://proceedings.mlr.press/v48/wangc16.html.