Improved Dynamic Graph Learning through Fault-Tolerant Sparsification
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7624-7633, 2019.
Abstract
Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification requires polynomial-order computational cost per update. We propose a new type of graph sparsification, namely fault-tolerant (FT) sparsification, which reduces this cost to only a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of the subsequent Laplacian-regularized estimation and graph SSL due to the FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments have confirmed the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
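For context, a minimal sketch of standard (non-fault-tolerant) spectral sparsification by effective-resistance sampling, the classical technique the abstract builds on. This is a baseline illustration, not the paper's FT method; the function name, the dense-pseudoinverse computation of effective resistances, and the graph representation are all assumptions made for this example.

```python
import numpy as np

def spectral_sparsify(edges, weights, n, q, rng=None):
    """Sample q edges with probability proportional to w_e * R_e
    (effective resistance) and reweight, so the sparsifier's Laplacian
    is an unbiased estimator of the original Laplacian. Classical
    spectral sparsification baseline; not the paper's FT variant."""
    rng = np.random.default_rng(rng)
    w = np.asarray(weights, dtype=float)
    # Build the graph Laplacian L = sum_e w_e (e_u - e_v)(e_u - e_v)^T.
    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        L[u, u] += we; L[v, v] += we
        L[u, v] -= we; L[v, u] -= we
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of L
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v).
    R = np.array([Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v] for u, v in edges])
    p = w * R
    p = p / p.sum()  # sampling distribution over edges
    # Sample q edges with replacement; accumulate reweighted weights.
    idx = rng.choice(len(edges), size=q, p=p)
    new_w = {}
    for i in idx:
        new_w[i] = new_w.get(i, 0.0) + w[i] / (q * p[i])
    kept = sorted(new_w)
    return [edges[i] for i in kept], [new_w[i] for i in kept]
```

Rerunning this from scratch on every update of a dynamic graph is what incurs the polynomial per-update cost the abstract refers to; the FT sparsification proposed in the paper is designed to avoid that recomputation.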