Improved Dynamic Graph Learning through Fault-Tolerant Sparsification

Chunjiang Zhu, Sabine Storandt, Kam-Yiu Lam, Song Han, Jinbo Bi
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7624-7633, 2019.

Abstract

Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification incurs polynomial computational cost per update. We propose a new type of graph sparsification, fault-tolerant (FT) sparsification, that reduces this cost to a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of subsequent Laplacian-regularized estimation and graph SSL due to FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
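For background on the kind of sparsification the abstract refers to, the sketch below implements classical (non-fault-tolerant) spectral sparsification by effective-resistance sampling in the style of Spielman–Srivastava. It is illustrative only: it is not the paper's FT construction, and all function names are ours.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian L = D - W for edges (u, v, w)."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def spectral_sparsify(n, edges, q, rng):
    """Draw q edge samples with probability proportional to
    w_e * R_e (weight times effective resistance), reweighting
    kept edges so the sparsifier's Laplacian is unbiased."""
    L = laplacian(n, edges)
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of L
    # Effective resistance of (u, v): (e_u - e_v)^T L^+ (e_u - e_v)
    res = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
                    for u, v, _ in edges])
    w = np.array([w for _, _, w in edges])
    p = w * res
    p /= p.sum()
    counts = rng.multinomial(q, p)
    # Each sample of edge e contributes weight w_e / (q * p_e).
    return [(u, v, w[i] * counts[i] / (q * p[i]))
            for i, (u, v, _) in enumerate(edges) if counts[i] > 0]
```

The point of the dynamic setting studied in the paper is that rerunning a procedure like this from scratch after every edge update is expensive; the FT sparsifier is built once so that it remains a valid sparsifier under a bounded number of edge changes.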

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhu19b,
  title     = {Improved Dynamic Graph Learning through Fault-Tolerant Sparsification},
  author    = {Zhu, Chunjiang and Storandt, Sabine and Lam, Kam-Yiu and Han, Song and Bi, Jinbo},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7624--7633},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/zhu19b/zhu19b.pdf},
  url       = {https://proceedings.mlr.press/v97/zhu19b.html},
  abstract  = {Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification incurs polynomial computational cost per update. We propose a new type of graph sparsification, fault-tolerant (FT) sparsification, that reduces this cost to a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of subsequent Laplacian-regularized estimation and graph SSL due to FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.}
}
Endnote
%0 Conference Paper
%T Improved Dynamic Graph Learning through Fault-Tolerant Sparsification
%A Chunjiang Zhu
%A Sabine Storandt
%A Kam-Yiu Lam
%A Song Han
%A Jinbo Bi
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhu19b
%I PMLR
%P 7624--7633
%U https://proceedings.mlr.press/v97/zhu19b.html
%V 97
%X Graph sparsification has been used to reduce the computational cost of learning over graphs, e.g., Laplacian-regularized estimation and graph semi-supervised learning (SSL). However, when graphs vary over time, repeated sparsification incurs polynomial computational cost per update. We propose a new type of graph sparsification, fault-tolerant (FT) sparsification, that reduces this cost to a constant. The computational cost of subsequent graph learning tasks can then be significantly improved with limited loss in accuracy. In particular, we give a theoretical analysis that upper-bounds the loss in accuracy of subsequent Laplacian-regularized estimation and graph SSL due to FT sparsification. In addition, FT spectral sparsification can be generalized to FT cut sparsification for cut-based graph learning. Extensive experiments confirm the computational efficiency and accuracy of the proposed methods for learning on dynamic graphs.
APA
Zhu, C., Storandt, S., Lam, K., Han, S. &amp; Bi, J. (2019). Improved Dynamic Graph Learning through Fault-Tolerant Sparsification. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7624-7633. Available from https://proceedings.mlr.press/v97/zhu19b.html.