Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU

Zeyuan Allen-Zhu, Yuanzhi Li
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:116-125, 2017.

Abstract

The online problem of computing the top eigenvector is fundamental to machine learning. The famous matrix-multiplicative-weight-update (MMWU) framework solves this online problem and gives optimal regret. However, since MMWU runs very slowly due to the computation of matrix exponentials, researchers proposed the follow-the-perturbed-leader (FTPL) framework, which is faster but incurs regret a factor $\sqrt{d}$ worse than optimal for dimension-$d$ matrices. We propose a follow-the-compressed-leader framework which not only matches the optimal regret of MMWU (up to polylog factors) but also runs no slower than FTPL. Our main idea is to “compress” the MMWU strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. This resolves an open question regarding how to obtain both (nearly) optimal and efficient algorithms for the online eigenvector problem.
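The abstract contrasts two baseline strategies for the online eigenvector problem: MMWU, which plays a density matrix proportional to a matrix exponential of the cumulative gains (the expensive step), and FTPL, which plays the top eigenvector of a randomly perturbed cumulative gain matrix (cheap, but with regret a $\sqrt{d}$ factor worse). The sketch below is a hedged illustration of those two baselines only, not the paper's follow-the-compressed-leader algorithm; the step size `eta`, the Gaussian perturbation, and the gain normalization are our own illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): the two baseline
# players for online eigenvector maximization described in the abstract.
import numpy as np
from scipy.linalg import expm

def mmwu_play(gain_sum: np.ndarray, eta: float) -> np.ndarray:
    """MMWU strategy: density matrix proportional to exp(eta * past gains).
    The expm call on a d x d matrix is the step that makes MMWU slow."""
    W = expm(eta * gain_sum)
    return W / np.trace(W)

def ftpl_play(gain_sum: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """FTPL strategy: top eigenvector of the perturbed leader.
    Needs only a top-eigenvector computation, hence faster than expm,
    but (per the abstract) its regret is a sqrt(d) factor worse."""
    d = gain_sum.shape[0]
    noise = rng.standard_normal((d, d))
    noise = (noise + noise.T) / 2          # symmetric random perturbation
    _, vecs = np.linalg.eigh(gain_sum + noise)
    v = vecs[:, -1]                        # eigenvector of largest eigenvalue
    return np.outer(v, v)                  # rank-1 density matrix

# Tiny run: the gain of a play W against gain matrix A is <W, A> = Tr(W A).
d, T = 10, 50
rng = np.random.default_rng(0)
eta = np.sqrt(np.log(d) / T)               # classical MMWU step size
gain_sum = np.zeros((d, d))
mmwu_gain = ftpl_gain = 0.0
for _ in range(T):
    A = rng.standard_normal((d, d))
    A = (A + A.T) / (2 * np.sqrt(d))       # roughly unit spectral norm
    mmwu_gain += np.trace(mmwu_play(gain_sum, eta) @ A)
    ftpl_gain += np.trace(ftpl_play(gain_sum, rng) @ A)
    gain_sum += A
best = np.linalg.eigvalsh(gain_sum)[-1]    # best fixed eigenvector in hindsight
print(f"regret  MMWU: {best - mmwu_gain:.2f}   FTPL: {best - ftpl_gain:.2f}")
```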

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-allen-zhu17d,
  title     = {Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster {MMWU}},
  author    = {Zeyuan Allen-Zhu and Yuanzhi Li},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {116--125},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/allen-zhu17d/allen-zhu17d.pdf},
  url       = {https://proceedings.mlr.press/v70/allen-zhu17d.html}
}
Endnote
%0 Conference Paper
%T Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
%A Zeyuan Allen-Zhu
%A Yuanzhi Li
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-allen-zhu17d
%I PMLR
%P 116--125
%U https://proceedings.mlr.press/v70/allen-zhu17d.html
%V 70
APA
Allen-Zhu, Z. & Li, Y. (2017). Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:116-125. Available from https://proceedings.mlr.press/v70/allen-zhu17d.html.
