Improved large-scale graph learning through ridge spectral sparsification
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:688-697, 2018.
Abstract
The representation and learning benefits of methods based on graph Laplacians, such as Laplacian smoothing or the harmonic function solution for semi-supervised learning (SSL), are empirically and theoretically well supported. Nonetheless, the exact versions of these methods scale poorly with the number of nodes $n$ of the graph. In this paper, we combine a spectral sparsification routine with Laplacian learning. Given a graph $G$ as input, our algorithm computes a sparsifier in a distributed way in $O(n\log^3(n))$ time, $O(m\log^3(n))$ work and $O(n\log(n))$ memory, using only $\log(n)$ rounds of communication. Furthermore, motivated by the regularization often employed in learning algorithms, we show that constructing sparsifiers that preserve the spectrum of the Laplacian only up to the regularization level may drastically reduce the size of the final graph. By constructing a spectrally-similar graph, we are able to bound the error induced by the sparsification for a variety of downstream tasks (e.g., SSL). We empirically validate the theoretical guarantees on the Amazon co-purchase graph and compare to state-of-the-art heuristics.
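As background for the abstract, the harmonic function solution for SSL mentioned above can be sketched in a few lines of NumPy. This is a minimal dense illustration, not the paper's method: the paper's contribution is making such Laplacian solvers scale by running them on a spectral sparsifier, whereas the naive solve below costs $O(n^3)$. The function name and signature here are illustrative, not from the paper.

```python
import numpy as np

def harmonic_ssl(W, labels, labeled_idx):
    """Harmonic function solution for graph-based SSL (illustrative sketch).

    W: (n, n) symmetric weighted adjacency matrix.
    labels: values on the labeled nodes.
    labeled_idx: indices of the labeled nodes.
    Returns the harmonic extension f on all n nodes.
    """
    n = W.shape[0]
    # Graph Laplacian L = D - W, with D the diagonal degree matrix.
    L = np.diag(W.sum(axis=1)) - W
    u = np.setdiff1d(np.arange(n), labeled_idx)  # unlabeled nodes
    f = np.zeros(n)
    f[labeled_idx] = labels
    # The harmonic solution minimizes f^T L f subject to the labels,
    # which gives f_u = -L_uu^{-1} L_ul f_l on the unlabeled block.
    f[u] = -np.linalg.solve(L[np.ix_(u, u)],
                            L[np.ix_(u, labeled_idx)] @ labels)
    return f
```

For example, on a 3-node path graph with the endpoints labeled 0 and 1, the middle node receives the average value 0.5. Replacing `L` with the Laplacian of a spectrally-similar sparsifier leaves this solution approximately unchanged, which is the error bound the abstract refers to.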