Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Yichen Chen, Dongdong Ge, Mengdi Wang, Zizhuo Wang, Yinyu Ye, Hao Yin
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:740-747, 2017.

Abstract

Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for $n$ data points (each of dimension $d$) and a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
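To make the setting concrete, the problem described in the abstract can be sketched as follows (the notation here is illustrative and may differ from the paper's own):

$$\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} \ell\left(a_i^{\top} x,\, b_i\right) \;+\; \lambda \sum_{j=1}^{d} p\left(|x_j|\right),$$

where $(a_i, b_i)$, $i = 1, \dots, n$, are the data points, $\ell$ is a loss function (e.g., the squared loss), $p$ is a concave sparsity penalty (e.g., the $L_q$ penalty with $0 \le q < 1$, SCAD, or MCP), and $\lambda > 0$ is a regularization parameter. An $\epsilon$-optimal solution is any $x$ whose objective value is within $\epsilon$ of the global minimum; the hardness result concerns the achievability of $\epsilon = \mathcal{O}(n^{c_1}d^{c_2})$.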

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-chen17d,
  title     = {Strong {NP}-Hardness for Sparse Optimization with Concave Penalty Functions},
  author    = {Yichen Chen and Dongdong Ge and Mengdi Wang and Zizhuo Wang and Yinyu Ye and Hao Yin},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {740--747},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/chen17d/chen17d.pdf},
  url       = {https://proceedings.mlr.press/v70/chen17d.html},
  abstract  = {Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for $n$ data points (each of dimension $d$) and a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.}
}
Endnote
%0 Conference Paper
%T Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions
%A Yichen Chen
%A Dongdong Ge
%A Mengdi Wang
%A Zizhuo Wang
%A Yinyu Ye
%A Hao Yin
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-chen17d
%I PMLR
%P 740--747
%U https://proceedings.mlr.press/v70/chen17d.html
%V 70
%X Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for $n$ data points (each of dimension $d$) and a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
APA
Chen, Y., Ge, D., Wang, M., Wang, Z., Ye, Y. & Yin, H. (2017). Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:740-747. Available from https://proceedings.mlr.press/v70/chen17d.html.