Open Problem: Running time complexity of accelerated $\ell_1$-regularized PageRank

Kimon Fountoulakis, Shenghao Yang
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5630-5632, 2022.

Abstract

Popular search engines such as Google Search and Twitter have traditionally utilized the Personalized PageRank (PPR) vector to rank their results. Beyond the web, a plethora of further applications are modelled using PPR~\citep{G15}. In recent work by~\cite{ACL06,GM14_ICML,fountoulakis2019variational}, it was shown that small probabilities in the PPR vector, e.g., web pages beyond the first page of Google Search results, can be thresholded out automatically by utilizing $\ell_1$-regularization or, equivalently, by early termination; both versions yield an approximate PPR vector. The current fastest method for computing the $\ell_1$-regularized PPR is the proximal gradient method, which requires $\tilde{\mathcal{O}}((\alpha \rho)^{-1})$ total running time, where $\alpha$ is the teleportation parameter and $\rho$ is a parameter that controls the level of sparsity in the $\ell_1$-regularized PPR. Importantly, this running time complexity does not depend on the size of the underlying graph (e.g., the length of the PPR vector), a property that has become a prerequisite for probing modern large-scale networks. A seemingly natural way to build an even faster algorithm for computing the $\ell_1$-regularized PPR is to accelerate the proximal gradient method and consequently reduce the running time complexity to $\tilde{\mathcal{O}}((\sqrt{\alpha} \rho)^{-1})$. This would yield a speed-up by a factor of $1/\sqrt{\alpha}$ and improve the running time of the many network-analytic methods that build upon PPR. However, the original analysis of the proximal gradient method in~\cite{fountoulakis2019variational} does not apply to the accelerated version. While we have empirical evidence indicating that accelerated proximal gradient requires less total running time, it is not even clear whether acceleration might lead to a worse running time complexity in the worst case.
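For context, the following display is a minimal sketch of the optimization formulation we have in mind; the exact scaling conventions are our recollection of~\cite{fountoulakis2019variational} and should be read as an assumption rather than a verbatim restatement. Let $A$ be the adjacency matrix, $D$ the diagonal degree matrix, $s$ the indicator vector of the seed node(s), and $Q = D^{-1/2}\big(D - \tfrac{1-\alpha}{2}(D+A)\big)D^{-1/2}$. The $\ell_1$-regularized PPR is the solution of
\[
\min_{q} \; f(q) + g(q), \qquad
f(q) = \tfrac{1}{2}\, q^\top Q q - \alpha\, s^\top D^{-1/2} q, \qquad
g(q) = \rho\,\alpha\, \big\| D^{1/2} q \big\|_1 ,
\]
with the approximate PPR vector recovered as $p = D^{1/2} q$. The eigenvalues of $Q$ lie in $[\alpha, 1]$, so $f$ is $\alpha$-strongly convex and its gradient $\nabla f(q) = Qq - \alpha D^{-1/2}s$ is $1$-Lipschitz. The proximal gradient iteration with unit step size is coordinate-wise soft-thresholding,
\[
q^{k+1}_i = \operatorname{sign}\!\big(u^{k}_i\big)\,\max\!\big\{ |u^{k}_i| - \rho\,\alpha\,\sqrt{d_i},\; 0 \big\},
\qquad u^{k} = q^{k} - \nabla f(q^{k}),
\]
and reaches an $\epsilon$-accurate solution in $\mathcal{O}(\alpha^{-1}\log(1/\epsilon))$ iterations, whereas a Nesterov-accelerated variant,
\[
y^{k} = q^{k} + \beta\,\big(q^{k} - q^{k-1}\big), \qquad
q^{k+1} = \operatorname{prox}_{g}\!\big(y^{k} - \nabla f(y^{k})\big), \qquad
\beta = \frac{1-\sqrt{\alpha}}{1+\sqrt{\alpha}},
\]
would need only $\mathcal{O}(\alpha^{-1/2}\log(1/\epsilon))$ iterations. The improvement in iteration count alone does not settle the open problem: the total running time also accounts for the per-iteration cost, which, as we understand the analysis in~\cite{fountoulakis2019variational}, is controlled through the sparsity of the iterates, and it is not known whether that control carries over to the accelerated iterates.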
