Robust Graph Representation Learning via Neural Sparsification

Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, Wei Wang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11458-11468, 2020.

Abstract

Graph representation learning serves as the core of important prediction tasks, ranging from product recommendation to fraud detection. Real-life graphs usually carry complex information in the local neighborhood, where each node is described by a rich set of features and connects to dozens or even hundreds of neighbors. Despite the success of neighborhood aggregation in graph neural networks, task-irrelevant information mixed into nodes' neighborhoods causes learned models to suffer from sub-optimal generalization performance. In this paper, we present NeuralSparse, a supervised graph sparsification technique that improves generalization power by learning to remove potentially task-irrelevant edges from input graphs. Our method takes both structural and non-structural information as input, utilizes deep neural networks to parameterize sparsification processes, and optimizes the parameters by feedback signals from downstream tasks. Under the NeuralSparse framework, supervised graph sparsification can seamlessly connect with existing graph neural networks for more robust performance. Experimental results on both benchmark and private datasets show that NeuralSparse can yield up to 7.2% improvement in testing accuracy when working with existing graph neural networks on node classification tasks.
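To make the sparsification idea concrete, below is a minimal PyTorch sketch of a learnable, differentiable edge sparsifier in the spirit of NeuralSparse. It scores each edge from its endpoint features, keeps roughly k edges per node via a Gumbel-noise top-k with a straight-through gradient, and returns per-edge weights that a downstream GNN can consume. The class name Sparsifier, the hyperparameters k and tau, and the straight-through top-k relaxation are illustrative assumptions: a simplified stand-in for the paper's Gumbel-softmax k-neighbor sampling, not the authors' implementation.

```python
# Minimal sketch of a learnable edge sparsifier in the spirit of NeuralSparse.
# All names (Sparsifier, k, tau) are illustrative; this is a simplified
# straight-through top-k stand-in for the paper's Gumbel-softmax sampling,
# not the authors' released code.
import torch
import torch.nn as nn


class Sparsifier(nn.Module):
    def __init__(self, feat_dim, hidden_dim, k, tau=1.0):
        super().__init__()
        self.k = k      # neighbors to retain per node (hyperparameter)
        self.tau = tau  # temperature of the Gumbel perturbation
        # Edge scorer: maps concatenated endpoint features to one logit per edge.
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, edge_index):
        # x: (N, feat_dim) node features; edge_index: (2, E) COO edge list.
        src, dst = edge_index
        logits = self.mlp(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        # Perturb logits with Gumbel noise so selection is stochastic yet
        # reparameterizable.
        u = torch.rand_like(logits)
        gumbel = -torch.log(-torch.log(u + 1e-10) + 1e-10)
        scores = (logits + gumbel) / self.tau
        # Hard top-k selection within each source node's neighborhood.
        mask = torch.zeros_like(scores)
        for node in torch.unique(src):
            idx = (src == node).nonzero(as_tuple=True)[0]
            kept = idx[scores[idx].topk(min(self.k, idx.numel())).indices]
            mask[kept] = 1.0
        # Straight-through estimator: hard 0/1 weights in the forward pass,
        # sigmoid gradients in the backward pass, so the downstream task loss
        # can train the edge scorer end to end.
        soft = torch.sigmoid(scores)
        return mask + soft - soft.detach()  # per-edge weights for the GNN
```

The returned weights can be fed to any GNN layer that accepts per-edge weights (e.g., a weighted GCN aggregation), so the node-classification loss of the downstream model supplies the feedback signal that trains the sparsifier, matching the end-to-end setup the abstract describes.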

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zheng20d,
  title     = {Robust Graph Representation Learning via Neural Sparsification},
  author    = {Zheng, Cheng and Zong, Bo and Cheng, Wei and Song, Dongjin and Ni, Jingchao and Yu, Wenchao and Chen, Haifeng and Wang, Wei},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11458--11468},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zheng20d/zheng20d.pdf},
  url       = {https://proceedings.mlr.press/v119/zheng20d.html}
}
EndNote
%0 Conference Paper
%T Robust Graph Representation Learning via Neural Sparsification
%A Cheng Zheng
%A Bo Zong
%A Wei Cheng
%A Dongjin Song
%A Jingchao Ni
%A Wenchao Yu
%A Haifeng Chen
%A Wei Wang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zheng20d
%I PMLR
%P 11458--11468
%U https://proceedings.mlr.press/v119/zheng20d.html
%V 119
APA
Zheng, C., Zong, B., Cheng, W., Song, D., Ni, J., Yu, W., Chen, H., & Wang, W. (2020). Robust Graph Representation Learning via Neural Sparsification. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11458-11468. Available from https://proceedings.mlr.press/v119/zheng20d.html.