Faster Noisy Power Method
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:1138-1164, 2022.
Abstract
Owing to its ability to handle diverse resource constraints, such as communication, memory, or privacy, the noisy power method, a meta-algorithm for computing the dominant eigenspace of a matrix, has found wide application in data analysis and statistics (e.g., PCA). For an input data matrix, the performance of the algorithm, as in the noiseless case, is characterized by the spectral gap, which largely dictates the convergence rate and also determines the level of noise that can be tolerated. A recent analysis improved the dependence on the consecutive spectral gap $(\lambda_{k}-\lambda_{k+1})$ to a dependence on $(\lambda_{k}-\lambda_{q+1})$, where $q$ can be much greater than the target rank $k$, thus yielding better performance through a significantly larger gap. However, $(\lambda_{k}-\lambda_{q+1})$ can still be quite small and may limit the method's applicability. In this paper, we further improve the dependence of the convergence rate from $O(\lambda_{k}-\lambda_{q+1})$ to $\tilde{O}(\sqrt{\lambda_{k}-\lambda_{q+1}})$ in a certain regime of a new parameter, giving a faster noise-tolerant algorithm. To achieve this, we propose the faster noisy power method, which introduces momentum acceleration into the noisy power iteration, and we present a novel analysis that differs from previous ones. We also extend our algorithm to distributed PCA and memory-efficient streaming PCA, obtaining correspondingly improved results in terms of the gap dependence.
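For intuition, the following is a minimal sketch of a momentum-accelerated (heavy-ball) noisy block power iteration of the kind the abstract describes. It assumes a symmetric input matrix $A$, per-step Gaussian noise $G_t$, a fixed momentum parameter $\beta$, and QR orthonormalization with the previous iterate rescaled by the same triangular factor to keep the three-term recursion consistent; the function name, the `noise_scale` parameter, and these specific choices are illustrative assumptions, not the paper's exact algorithm or parameter regime.

```python
import numpy as np

def momentum_noisy_power_method(A, k, q, beta, num_iters, noise_scale=0.0, seed=0):
    """Sketch: noisy block power iteration with heavy-ball momentum.

    A: symmetric (n x n) matrix; q >= k columns are iterated and the
    top-k block is returned. `beta` is the momentum parameter and
    `noise_scale` models the per-step noise G_t (e.g., from privacy
    or sketching). Illustrative only, not the paper's exact method.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y, _ = np.linalg.qr(rng.standard_normal((n, q)))  # random orthonormal start
    Y_prev = np.zeros_like(Y)
    for _ in range(num_iters):
        G = noise_scale * rng.standard_normal((n, q))  # noise injected each step
        # Heavy-ball update: power step minus momentum term, plus noise.
        Y_next = A @ Y - beta * Y_prev + G
        # Orthonormalize; rescale the previous iterate by the same R^{-1}
        # so the three-term momentum recursion stays scale-consistent.
        Q, R = np.linalg.qr(Y_next)
        Y_prev = Y @ np.linalg.inv(R)
        Y = Q
    return Y[:, :k]  # approximate basis for the top-k eigenspace
```

Roughly speaking, in the noiseless setting heavy-ball analyses of power iteration take $\beta$ on the order of $\lambda_{q+1}^{2}/4$, which is what turns the usual $1/\mathrm{gap}$ iteration count into a $1/\sqrt{\mathrm{gap}}$ one; the parameter regime and noise conditions under which this acceleration survives the per-step noise $G_t$ are what the paper's analysis establishes.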