Improved large-scale graph learning through ridge spectral sparsification

Daniele Calandriello, Alessandro Lazaric, Ioannis Koutis, Michal Valko
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:688-697, 2018.

Abstract

The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or the harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes $n$ of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph $G$ as input, our algorithm computes a sparsifier in a distributed way in $O(n\log^3(n))$ time, $O(m\log^3(n))$ work and $O(n\log(n))$ memory, using only $\log(n)$ rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on the Amazon co-purchase graph and compare against state-of-the-art heuristics.
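The sketch below is a minimal, non-distributed Python illustration of the idea summarized in the abstract, not the paper's algorithm: it computes exact gamma-ridge effective resistances with a dense inverse (feasible only for small graphs, whereas the paper relies on fast approximate solvers and a distributed sampling routine), samples and reweights edges proportionally to those scores, and then runs Laplacian smoothing on the sparsified graph. All function names, parameter choices, and the toy graph are illustrative assumptions.

# Minimal, non-distributed sketch (assumption: not the paper's routine) of
# ridge spectral sparsification followed by Laplacian smoothing.
import numpy as np

def laplacian(n, edges, weights):
    """Dense graph Laplacian L = D - W from a weighted edge list."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def ridge_sparsify(n, edges, weights, gamma=1e-2, eps=0.5, rng=None):
    """Sample edges proportionally to their gamma-ridge leverage score
    w_e * b_e^T (L + gamma*I)^{-1} b_e and reweight the kept edges."""
    rng = np.random.default_rng(rng)
    L = laplacian(n, edges, weights)
    Kinv = np.linalg.inv(L + gamma * np.eye(n))      # ridge-regularized inverse (exact, small graphs only)
    scores = np.array([w * (Kinv[u, u] + Kinv[v, v] - 2 * Kinv[u, v])
                       for (u, v), w in zip(edges, weights)])
    q = int(np.ceil(scores.sum() * np.log(n) / eps**2))   # oversampling budget
    p = scores / scores.sum()
    counts = rng.multinomial(q, p)                   # q samples with replacement
    new_edges, new_weights = [], []
    for e, c in enumerate(counts):
        if c > 0:
            new_edges.append(edges[e])
            new_weights.append(weights[e] * c / (q * p[e]))  # unbiased reweighting
    return new_edges, new_weights

def laplacian_smoothing(L, y, gamma=1e-2):
    """Laplacian-smoothed labels: argmin_f ||f - y||^2 + gamma * f^T L f."""
    n = L.shape[0]
    return np.linalg.solve(np.eye(n) + gamma * L, y)

# Toy usage: two triangles joined by a single weak edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
weights = [1.0] * 6 + [0.1]
s_edges, s_weights = ridge_sparsify(6, edges, weights, gamma=0.1, eps=0.5, rng=0)
L_tilde = laplacian(6, s_edges, s_weights)
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
print(laplacian_smoothing(L_tilde, y, gamma=1.0))

The key point the sketch illustrates is that the regularization level gamma caps the ridge leverage scores, so the sampling budget (and hence the sparsifier size) can be much smaller than what an exact spectral sparsifier would require.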

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-calandriello18a,
  title     = {Improved large-scale graph learning through ridge spectral sparsification},
  author    = {Calandriello, Daniele and Lazaric, Alessandro and Koutis, Ioannis and Valko, Michal},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {688--697},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/calandriello18a/calandriello18a.pdf},
  url       = {https://proceedings.mlr.press/v80/calandriello18a.html},
  abstract  = {The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes $n$ of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph $G$ as input, our algorithm computes a sparsifier in a distributed way in $O(n\log^3(n))$ time, $O(m\log^3(n))$ work and $O(n\log(n))$ memory, using only $\log(n)$ rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on Amazon co-purchase graph and compare to the state-of-the-art heuristics.}
}
Endnote
%0 Conference Paper
%T Improved large-scale graph learning through ridge spectral sparsification
%A Daniele Calandriello
%A Alessandro Lazaric
%A Ioannis Koutis
%A Michal Valko
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-calandriello18a
%I PMLR
%P 688--697
%U https://proceedings.mlr.press/v80/calandriello18a.html
%V 80
%X The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes $n$ of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph $G$ as input, our algorithm computes a sparsifier in a distributed way in $O(n\log^3(n))$ time, $O(m\log^3(n))$ work and $O(n\log(n))$ memory, using only $\log(n)$ rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on Amazon co-purchase graph and compare to the state-of-the-art heuristics.
APA
Calandriello, D., Lazaric, A., Koutis, I. & Valko, M. (2018). Improved large-scale graph learning through ridge spectral sparsification. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:688-697. Available from https://proceedings.mlr.press/v80/calandriello18a.html.