PatchGT: Transformer Over Non-Trainable Clusters for Learning Graph Representations

Han Gao, Xu Han, Jiaoyang Huang, Jian-Xun Wang, Liping Liu
Proceedings of the First Learning on Graphs Conference, PMLR 198:27:1-27:25, 2022.

Abstract

Recently, the Transformer architecture has shown strong performance on graph learning tasks. However, these Transformer models operate directly on graph nodes and may have difficulty learning high-level information. Inspired by the Vision Transformer, which operates on image patches, we propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT). Unlike previous Transformer-based models for learning graph representations, PatchGT learns from non-trainable graph patches rather than directly from nodes, which helps reduce computation and improve model performance. The key idea is to segment a graph into patches via spectral clustering without any trainable parameters; the model then uses GNN layers to learn patch-level representations and Transformer layers to obtain graph-level representations. The architecture leverages the spectral information of graphs and combines the strengths of GNNs and Transformers. Further, we show the limitations of previously proposed trainable hierarchical clustering methods both theoretically and empirically. We also prove that the proposed non-trainable spectral clustering method is permutation invariant and can help address information bottlenecks in the graph. PatchGT achieves higher expressiveness than 1-WL-type GNNs, and our empirical study shows that PatchGT achieves competitive performance on benchmark datasets and provides interpretability for its predictions. The implementation of our algorithm is released at our GitHub repo: https://github.com/tufts-ml/PatchGT.
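
The pipeline described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' released implementation (see the GitHub repo linked above); it is an illustration under assumptions: the names `spectral_patches` and `PatchGTSketch` are hypothetical, a single mean-aggregation message-passing step stands in for the GNN layers, and a small k-means loop on the bottom eigenvectors of the normalized Laplacian stands in for the non-trainable spectral clustering.

```python
# Minimal sketch of the PatchGT idea (not the authors' code):
# 1) non-trainable spectral clustering of nodes into patches,
# 2) GNN-style node update and patch pooling,
# 3) Transformer over patch tokens, then a graph-level readout.
import torch
import torch.nn as nn


def spectral_patches(adj: torch.Tensor, num_patches: int, iters: int = 50) -> torch.Tensor:
    """Assign each node to a patch using the bottom eigenvectors of the
    normalized graph Laplacian followed by a plain k-means loop.
    No trainable parameters are involved."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-8).pow(-0.5))
    lap = torch.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, eigvecs = torch.linalg.eigh(lap)            # eigenvalues in ascending order
    feats = eigvecs[:, :num_patches]               # spectral embedding of nodes
    centers = feats[torch.randperm(feats.shape[0])[:num_patches]].clone()
    for _ in range(iters):                         # k-means, no gradients needed
        assign = torch.cdist(feats, centers).argmin(dim=1)
        for k in range(num_patches):
            mask = assign == k
            if mask.any():
                centers[k] = feats[mask].mean(dim=0)
    return assign                                  # [num_nodes] patch ids


class PatchGTSketch(nn.Module):
    """Patch tokens from node embeddings, then a Transformer over the patches."""

    def __init__(self, in_dim: int, hid_dim: int, num_patches: int, out_dim: int):
        super().__init__()
        self.num_patches = num_patches
        self.node_update = nn.Linear(in_dim, hid_dim)    # stand-in for GNN layers
        layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.readout = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        assign = spectral_patches(adj, self.num_patches)
        # One mean-neighborhood aggregation step followed by a linear map.
        h = torch.relu(self.node_update(adj @ x / adj.sum(dim=1, keepdim=True).clamp(min=1)))
        # Pool node embeddings into patch tokens (mean within each patch).
        tokens = torch.stack([
            h[assign == k].mean(dim=0) if (assign == k).any() else torch.zeros(h.shape[1])
            for k in range(self.num_patches)
        ])
        z = self.transformer(tokens.unsqueeze(0))        # [1, num_patches, hid_dim]
        return self.readout(z.mean(dim=1))               # graph-level prediction
```

For a graph with dense adjacency `adj` ([num_nodes, num_nodes]) and node features `x` ([num_nodes, in_dim]), calling `PatchGTSketch(in_dim, 64, num_patches=4, out_dim=1)(x, adj)` yields a graph-level prediction. The actual PatchGT model uses full GNN layers and additional components described in the paper and repository.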

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-gao22a,
  title     = {PatchGT: Transformer Over Non-Trainable Clusters for Learning Graph Representations},
  author    = {Gao, Han and Han, Xu and Huang, Jiaoyang and Wang, Jian-Xun and Liu, Liping},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {27:1--27:25},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/gao22a/gao22a.pdf},
  url       = {https://proceedings.mlr.press/v198/gao22a.html},
  abstract  = {Recently the Transformer structure has shown good performances in graph learning tasks. However, these Transformer models directly work on graph nodes and may have difficulties learning high-level information. Inspired by the vision transformer, which applies to image patches, we propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT). Unlike previous transformer-based models for learning graph representations, PatchGT learns from non-trainable graph patches, not from nodes directly. It can help save computation and improve the model performance. The key idea is to segment a graph into patches based on spectral clustering without any trainable parameters, with which the model can first use GNN layers to learn patch-level representations and then use Transformer layers to obtain graph-level representations. The architecture leverages the spectral information of graphs and combines the strengths of GNNs and Transformers. Further, We show the limitations of previous hierarchical trainable clusters theoretically and empirically. We also prove the proposed non-trainable spectral clustering method is permutation invariant and can help address the information bottlenecks in the graph. PatchGT achieves higher expressiveness than 1-WL-type GNNs, and the empirical study shows that PatchGT achieves competitive performances on benchmark datasets and provides interpretability to its predictions. The implementation of our algorithm is released at our GitHub repo: https://github.com/tufts-ml/PatchGT.}
}
Endnote
%0 Conference Paper
%T PatchGT: Transformer Over Non-Trainable Clusters for Learning Graph Representations
%A Han Gao
%A Xu Han
%A Jiaoyang Huang
%A Jian-Xun Wang
%A Liping Liu
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-gao22a
%I PMLR
%P 27:1--27:25
%U https://proceedings.mlr.press/v198/gao22a.html
%V 198
%X Recently the Transformer structure has shown good performances in graph learning tasks. However, these Transformer models directly work on graph nodes and may have difficulties learning high-level information. Inspired by the vision transformer, which applies to image patches, we propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT). Unlike previous transformer-based models for learning graph representations, PatchGT learns from non-trainable graph patches, not from nodes directly. It can help save computation and improve the model performance. The key idea is to segment a graph into patches based on spectral clustering without any trainable parameters, with which the model can first use GNN layers to learn patch-level representations and then use Transformer layers to obtain graph-level representations. The architecture leverages the spectral information of graphs and combines the strengths of GNNs and Transformers. Further, We show the limitations of previous hierarchical trainable clusters theoretically and empirically. We also prove the proposed non-trainable spectral clustering method is permutation invariant and can help address the information bottlenecks in the graph. PatchGT achieves higher expressiveness than 1-WL-type GNNs, and the empirical study shows that PatchGT achieves competitive performances on benchmark datasets and provides interpretability to its predictions. The implementation of our algorithm is released at our GitHub repo: https://github.com/tufts-ml/PatchGT.
APA
Gao, H., Han, X., Huang, J., Wang, J.-X., & Liu, L. (2022). PatchGT: Transformer Over Non-Trainable Clusters for Learning Graph Representations. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:27:1-27:25. Available from https://proceedings.mlr.press/v198/gao22a.html.
