Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:116-125, 2017.
Abstract
The online problem of computing the top eigenvector is fundamental to machine learning. The famous matrix-multiplicative-weight-update (MMWU) framework solves this online problem and gives optimal regret. However, since MMWU runs very slowly due to the computation of matrix exponentials, researchers proposed the follow-the-perturbed-leader (FTPL) framework, which is faster but a factor $\sqrt{d}$ worse than the optimal regret for dimension-$d$ matrices. We propose a follow-the-compressed-leader framework which not only matches the optimal regret of MMWU (up to polylog factors) but also runs no slower than FTPL. Our main idea is to "compress" the MMWU strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. This resolves an open question regarding how to obtain algorithms for the online eigenvector problem that are both (nearly) optimal and efficient.
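For context, the standard MMWU strategy plays a density matrix proportional to the matrix exponential of the accumulated gain matrices, and the per-round matrix exponential is the expensive step the abstract refers to. The sketch below is a minimal illustration of that standard update (not the paper's compressed method); the names `mmwu_densities`, `gain_matrices`, and `eta` are hypothetical choices for this example.

```python
import numpy as np
from scipy.linalg import expm

def mmwu_densities(gain_matrices, eta):
    """Yield the MMWU density matrix W_t = exp(eta * sum_{s<t} A_s) / Z_t
    before each round t. The matrix exponential here costs O(d^3) per
    round, which is the bottleneck the paper's follow-the-compressed-leader
    framework avoids."""
    d = gain_matrices[0].shape[0]
    cumulative = np.zeros((d, d))
    for A in gain_matrices:
        E = expm(eta * cumulative)   # expensive d x d matrix exponential
        W = E / np.trace(E)          # normalize to a density matrix (trace 1)
        yield W                      # play W_t, then observe gain matrix A_t
        cumulative += A              # accumulate observed gains

# Illustrative usage with random symmetric gain matrices:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 5, 10
    mats = [(M + M.T) / 2 for M in rng.standard_normal((T, d, d))]
    total_gain = 0.0
    for t, W in enumerate(mmwu_densities(mats, eta=0.1)):
        total_gain += np.trace(W @ mats[t])  # inner product <W_t, A_t>
    # Regret compares total_gain against the best fixed unit vector
    # in hindsight, i.e., the top eigenvalue of sum(mats).
```

The per-round gain $\langle W_t, A_t\rangle$ is compared against the best fixed rank-one density matrix in hindsight, which is exactly the top-eigenvector benchmark the abstract describes.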