Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization

Xiaotong Yuan, Ping Li, Tong Zhang
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):127-135, 2014.

Abstract

Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. The method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressed sensing to the generic problem setting of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard truncation step, with or without debiasing. We prove that our method enjoys strong guarantees analogous to those of HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidence shows that our method is superior to state-of-the-art greedy selection methods when applied to sparse logistic regression and sparse support vector machine learning tasks.
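To make the iteration concrete, below is a minimal Python sketch of the procedure described above (not the authors' implementation): a gradient descent step, followed by hard truncation to the k largest-magnitude coordinates, with an optional debiasing step that refits the loss on the selected support. The names grad_f, debias, and the step size eta are illustrative assumptions.

import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    z = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    z[idx] = x[idx]
    return z

def grahtp(grad_f, x0, k, eta=0.1, iters=100, debias=None):
    # Sketch of gradient hard thresholding pursuit.
    # grad_f : callable returning the gradient of the loss at x
    # x0     : initial point (1-D numpy array)
    # k      : target sparsity level
    # eta    : step size (assumed constant here)
    # debias : optional callable that re-minimizes the loss restricted to a
    #          given support and returns a full-length vector; None skips it
    x = x0.copy()
    for _ in range(iters):
        # gradient descent step
        x_tilde = x - eta * grad_f(x)
        # hard truncation: project onto the set of k-sparse vectors
        x = hard_threshold(x_tilde, k)
        # optional debiasing: refit on the selected support
        if debias is not None:
            support = np.flatnonzero(x)
            x = debias(support)
    return x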

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-yuan14,
  title     = {Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization},
  author    = {Yuan, Xiaotong and Li, Ping and Zhang, Tong},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {127--135},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/yuan14.pdf},
  url       = {https://proceedings.mlr.press/v32/yuan14.html},
  abstract  = {Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressed sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard truncation step with or without debiasing. We prove that our method enjoys the strong guarantees analogous to HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidences show that our method is superior to the state-of-the-art greedy selection methods when applied to learning tasks of sparse logistic regression and sparse support vector machines.}
}
Endnote
%0 Conference Paper
%T Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
%A Xiaotong Yuan
%A Ping Li
%A Tong Zhang
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-yuan14
%I PMLR
%P 127--135
%U https://proceedings.mlr.press/v32/yuan14.html
%V 32
%N 2
%X Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressed sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard truncation step with or without debiasing. We prove that our method enjoys the strong guarantees analogous to HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidences show that our method is superior to the state-of-the-art greedy selection methods when applied to learning tasks of sparse logistic regression and sparse support vector machines.
RIS
TY - CPAPER
TI - Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
AU - Xiaotong Yuan
AU - Ping Li
AU - Tong Zhang
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-yuan14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 127
EP - 135
L1 - http://proceedings.mlr.press/v32/yuan14.pdf
UR - https://proceedings.mlr.press/v32/yuan14.html
AB - Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressed sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard truncation step with or without debiasing. We prove that our method enjoys the strong guarantees analogous to HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidences show that our method is superior to the state-of-the-art greedy selection methods when applied to learning tasks of sparse logistic regression and sparse support vector machines.
ER -
APA
Yuan, X., Li, P. & Zhang, T. (2014). Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):127-135. Available from https://proceedings.mlr.press/v32/yuan14.html.