Faster Eigenvector Computation via Shift-and-Invert Preconditioning
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2626-2634, 2016.
Abstract
We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix A ∈ ℝ^(n×d), we show how to compute an ϵ-approximate top eigenvector of AᵀA in time Õ([nnz(A) + d·sr(A)/gap²] ⋅ log(1/ϵ)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. We also consider an online setting in which, given a stream of i.i.d. samples from a distribution D with covariance matrix Σ and a vector x0 which is an O(gap) approximate top eigenvector for Σ, we show how to refine x0 to an ϵ-approximation using O(var(D)/(gap⋅ϵ)) samples from D. Here var(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x0, we obtain improved sample complexities and runtimes under a variety of assumptions on D. We achieve our results via a robust analysis of the classic shift-and-invert preconditioning method. This technique lets us reduce eigenvector computation to approximately solving a series of linear systems with fast stochastic gradient methods.
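To make the reduction concrete, here is a minimal sketch (not the paper's algorithm) of classic shift-and-invert power iteration in Python: power iteration applied to (λI − AᵀA)⁻¹ for a shift λ slightly above the top eigenvalue λ₁, with each step cast as a linear-system solve. The paper solves these systems approximately with fast stochastic gradient methods; this sketch instead uses a dense direct solver for clarity, and the function name, the shift 1.01·λ₁, and the iteration count are illustrative assumptions.

    # Minimal sketch of shift-and-invert power iteration for the top
    # eigenvector of A^T A (illustrative; uses a dense solver in place
    # of the stochastic gradient methods analyzed in the paper).
    import numpy as np

    def shift_invert_top_eigvec(A, lam, iters=20, seed=0):
        """Approximate the top eigenvector of A^T A.

        lam is assumed to satisfy lam > lambda_1(A^T A); the closer lam
        is to lambda_1, the larger the eigengap of (lam*I - A^T A)^{-1}
        and the faster the power iteration converges.
        """
        d = A.shape[1]
        M = lam * np.eye(d) - A.T @ A  # shifted matrix; positive definite
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        for _ in range(iters):
            # One power-iteration step on (lam*I - A^T A)^{-1}: solve a
            # linear system rather than forming the inverse explicitly.
            x = np.linalg.solve(M, x)
            x /= np.linalg.norm(x)
        return x

    # Usage: overshoot the true top eigenvalue slightly (hypothetical choice).
    A = np.random.default_rng(1).standard_normal((200, 50))
    lambda_1 = np.linalg.eigvalsh(A.T @ A)[-1]
    x = shift_invert_top_eigvec(A, lam=1.01 * lambda_1)
    rayleigh = x @ (A.T @ (A @ x))
    print(f"Rayleigh quotient / lambda_1 = {rayleigh / lambda_1:.6f}")  # -> ~1

Because (λI − AᵀA)⁻¹ shares eigenvectors with AᵀA and its largest eigenvalue 1/(λ − λ₁) corresponds to λ₁, this iteration converges to the top eigenvector, and choosing λ close to λ₁ amplifies the effective eigengap; this is the preconditioning effect the paper's robust analysis exploits.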