A Greedy Anytime Algorithm for Sparse PCA

Guy Holtzman, Adam Soffer, Dan Vilenchik
Proceedings of the Thirty-Third Conference on Learning Theory, PMLR 125:1939-1956, 2020.

Abstract

The taxing computational effort involved in solving some high-dimensional statistical problems, in particular problems involving non-convex optimization, has popularized the development and analysis of algorithms that run efficiently (in polynomial time) but with no general guarantee on statistical consistency. In light of ever-increasing compute power and decreasing costs, a more useful characterization of algorithms is by their ability to calibrate the invested computational effort with various characteristics of the input at hand and with the available computational resources. We propose a new greedy algorithm for the $\ell_0$-sparse PCA problem that supports this calibration principle. We provide both a rigorous analysis of our algorithm in the spiked covariance model and simulation results with comparisons to other existing methods. Our findings show that our algorithm recovers the spike in SNR regimes where all polynomial-time algorithms fail, while running in reasonable parallel time on a cluster.
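The general greedy idea behind $\ell_0$-sparse PCA can be illustrated with a minimal forward-selection sketch: repeatedly add the coordinate that most increases the leading eigenvalue of the selected principal submatrix of the sample covariance. This is an illustration of the generic greedy approach on the spiked covariance model, not a reproduction of the authors' exact algorithm, and all function names here are our own:

```python
import numpy as np

def greedy_sparse_pca(S, k):
    """Forward-selection sketch for l0-sparse PCA.

    Greedily grows a support of size k, each step adding the coordinate
    that maximizes the top eigenvalue of the k-by-k principal submatrix
    of the sample covariance S. Returns a d-dimensional unit vector
    supported on the chosen coordinates.
    """
    d = S.shape[0]
    support = []
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(d):
            if i in support:
                continue
            idx = support + [i]
            sub = S[np.ix_(idx, idx)]
            val = np.linalg.eigvalsh(sub)[-1]  # largest eigenvalue of submatrix
            if val > best_val:
                best_i, best_val = i, val
        support.append(best_i)
    # Leading eigenvector of the selected principal submatrix,
    # embedded back into R^d on the chosen support.
    sub = S[np.ix_(support, support)]
    w = np.linalg.eigh(sub)[1][:, -1]
    v = np.zeros(d)
    v[support] = w
    return v

# Usage on synthetic spiked-covariance data: Sigma = I + beta * v0 v0^T
# with a k-sparse unit spike v0 and a strong SNR so recovery succeeds.
rng = np.random.default_rng(0)
d, k, n, beta = 30, 5, 500, 20.0
v0 = np.zeros(d)
v0[:k] = 1.0 / np.sqrt(k)
cov = np.eye(d) + beta * np.outer(v0, v0)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)
S = X.T @ X / n
v_hat = greedy_sparse_pca(S, k)
print(abs(v_hat @ v0))  # close to 1 when the spike is recovered
```

Each greedy step costs one top-eigenvalue computation per candidate coordinate, which is what makes the candidate evaluations embarrassingly parallel across a cluster.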

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-holtzman20a,
  title     = {A Greedy Anytime Algorithm for Sparse {PCA}},
  author    = {Holtzman, Guy and Soffer, Adam and Vilenchik, Dan},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1939--1956},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/holtzman20a/holtzman20a.pdf},
  url       = {http://proceedings.mlr.press/v125/holtzman20a.html},
  abstract  = {The taxing computational effort that is involved in solving some high-dimensional statistical problems, in particular problems involving non-convex optimization, has popularized the development and analysis of algorithms that run efficiently (polynomial-time) but with no general guarantee on statistical consistency. In light of the ever-increasing compute power and decreasing costs, a more useful characterization of algorithms is by their ability to calibrate the invested computational effort with various characteristics of the input at hand and with the available computational resources. We propose a new greedy algorithm for the $\ell_0$-sparse PCA problem which supports the calibration principle. We provide both a rigorous analysis of our algorithm in the spiked covariance model, as well as simulation results and comparison with other existing methods. Our findings show that our algorithm recovers the spike in SNR regimes where all polynomial-time algorithms fail while running in a reasonable parallel-time on a cluster.}
}
Endnote
%0 Conference Paper
%T A Greedy Anytime Algorithm for Sparse PCA
%A Guy Holtzman
%A Adam Soffer
%A Dan Vilenchik
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-holtzman20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1939--1956
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X The taxing computational effort that is involved in solving some high-dimensional statistical problems, in particular problems involving non-convex optimization, has popularized the development and analysis of algorithms that run efficiently (polynomial-time) but with no general guarantee on statistical consistency. In light of the ever-increasing compute power and decreasing costs, a more useful characterization of algorithms is by their ability to calibrate the invested computational effort with various characteristics of the input at hand and with the available computational resources. We propose a new greedy algorithm for the $\ell_0$-sparse PCA problem which supports the calibration principle. We provide both a rigorous analysis of our algorithm in the spiked covariance model, as well as simulation results and comparison with other existing methods. Our findings show that our algorithm recovers the spike in SNR regimes where all polynomial-time algorithms fail while running in a reasonable parallel-time on a cluster.
APA
Holtzman, G., Soffer, A., & Vilenchik, D. (2020). A Greedy Anytime Algorithm for Sparse PCA. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:1939-1956.