Stochastic Training of Graph Convolutional Networks with Variance Reduction

Jianfei Chen, Jun Zhu, Le Song
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:942-950, 2018.

Abstract

Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, a GCN computes the representation of a node recursively from its neighbors, so the receptive field size grows exponentially with the number of layers. Previous attempts to reduce the receptive field size by subsampling neighbors have no convergence guarantee, and their receptive field size per node is still on the order of hundreds. In this paper, we develop control variate-based algorithms with a new theoretical guarantee of convergence to a local optimum of GCN, regardless of the neighbor sampling size. Empirical results show that our algorithms achieve convergence rates and model quality similar to the exact algorithm while using only two neighbors per node. The running time of our algorithms on a large Reddit dataset is only one seventh that of previous neighbor sampling algorithms.
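To make the control-variate idea concrete, below is a minimal NumPy sketch of a variance-reduced neighbor aggregation: the exact aggregation of cached "historical" activations serves as a baseline, and only the residual between current and historical activations is estimated from a small neighbor sample. This is an illustrative sketch, not the paper's implementation; the names (P for the normalized adjacency matrix, H for current activations, H_bar for historical activations, cv_aggregate) are ours.

import numpy as np

def cv_aggregate(P, H, H_bar, num_samples=2, rng=None):
    """Estimate the GCN aggregation P @ H by sampling `num_samples`
    neighbors per node and correcting with the exact aggregation of
    the historical activations H_bar (the control variate)."""
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    est = P @ H_bar                       # exact baseline term, no sampling needed
    for u in range(n):
        nbrs = np.flatnonzero(P[u])       # neighbors of u (nonzero entries of row u)
        if nbrs.size == 0:
            continue
        k = min(num_samples, nbrs.size)
        sampled = rng.choice(nbrs, size=k, replace=False)
        # Unbiased Monte Carlo correction on the residual H - H_bar;
        # its variance shrinks as the historical activations approach
        # the current ones.
        scale = nbrs.size / k
        est[u] += scale * (P[u, sampled] @ (H[sampled] - H_bar[sampled]))
    return est

In a training loop, the cached activations of the sampled nodes would be refreshed after each update, so the residual (and hence the estimator's variance) shrinks over time. That is the intuition, under these assumptions, for why convergence can hold even with as few as two sampled neighbors per node.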

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-chen18p,
  title     = {Stochastic Training of Graph Convolutional Networks with Variance Reduction},
  author    = {Chen, Jianfei and Zhu, Jun and Song, Le},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {942--950},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/chen18p/chen18p.pdf},
  url       = {https://proceedings.mlr.press/v80/chen18p.html}
}
Endnote
%0 Conference Paper
%T Stochastic Training of Graph Convolutional Networks with Variance Reduction
%A Jianfei Chen
%A Jun Zhu
%A Le Song
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-chen18p
%I PMLR
%P 942--950
%U https://proceedings.mlr.press/v80/chen18p.html
%V 80
APA
Chen, J., Zhu, J. & Song, L. (2018). Stochastic Training of Graph Convolutional Networks with Variance Reduction. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:942-950. Available from https://proceedings.mlr.press/v80/chen18p.html.
