HOT: Higher-Order Dynamic Graph Representation Learning With Efficient Transformers

Maciej Besta, Afonso Claudino Catarino, Lukas Gianinazzi, Nils Blach, Piotr Nyczyk, Hubert Niewiadomski, Torsten Hoefler
Proceedings of the Second Learning on Graphs Conference, PMLR 231:15:1-15:20, 2024.

Abstract

Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such dynamic settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of work by harnessing higher-order (HO) graph structures; specifically, k-hop neighbors and more general subgraphs containing a given pair of vertices. Harnessing such HO structures by encoding them into the attention matrix of the underlying Transformer results in higher accuracy of link prediction outcomes, but at the expense of increased memory pressure. To alleviate this, we resort to a recent class of schemes that impose hierarchy on the attention matrix, significantly reducing memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization. HOT outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15% higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the MOOC dataset. Our design can be seamlessly extended to other dynamic GRL workloads.
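
The abstract describes modeling each graph update as a single Transformer token and enriching a query pair with its k-hop neighborhood. The following minimal PyTorch sketch is a rough, hypothetical illustration of that token-per-update idea only; it is not the authors' implementation, it omits HOT's hierarchical-attention memory optimization, and all names (HigherOrderLinkPredictor, encode_updates, and the toy data) are invented for this example. It turns a history of updates touching the query vertices or their k-hop neighbors into tokens, encodes them with a standard Transformer encoder, and scores the vertex pair.

# Illustrative sketch (not the authors' code): each past graph update
# (src, dst, timestamp) involving the query vertices or their k-hop
# neighbors becomes one token for a small Transformer encoder.
import torch
import torch.nn as nn

class HigherOrderLinkPredictor(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_vertices=1000):
        super().__init__()
        self.vertex_emb = nn.Embedding(n_vertices, d_model)
        self.time_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Sequential(nn.Linear(2 * d_model, d_model),
                                    nn.ReLU(), nn.Linear(d_model, 1))

    def encode_updates(self, src, dst, t):
        # One token per past update: sum of the two endpoint embeddings
        # plus a simple projection of the timestamp.
        tok = (self.vertex_emb(src) + self.vertex_emb(dst)
               + self.time_proj(t.unsqueeze(-1)))
        return self.encoder(tok.unsqueeze(0)).squeeze(0)

    def forward(self, u, v, src, dst, t):
        # src, dst, t: history of updates touching u, v, or their k-hop neighbors.
        h = self.encode_updates(src, dst, t).mean(dim=0)   # pooled history context
        pair = torch.cat([self.vertex_emb(u) + h, self.vertex_emb(v) + h])
        return torch.sigmoid(self.scorer(pair))            # link probability

# Toy usage: will vertices 3 and 7 connect, given five past updates?
model = HigherOrderLinkPredictor()
src = torch.tensor([3, 7, 2, 3, 9]); dst = torch.tensor([2, 9, 7, 5, 3])
t = torch.tensor([0.1, 0.2, 0.35, 0.5, 0.9])
print(float(model(torch.tensor(3), torch.tensor(7), src, dst, t)))

Note that this sketch uses a vanilla nn.TransformerEncoder, whose attention cost grows quadratically with the number of history tokens; that is exactly the memory pressure the paper addresses by imposing hierarchy on the attention matrix.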

Cite this Paper


BibTeX
@InProceedings{pmlr-v231-besta24a,
  title     = {HOT: Higher-Order Dynamic Graph Representation Learning With Efficient Transformers},
  author    = {Besta, Maciej and Catarino, Afonso Claudino and Gianinazzi, Lukas and Blach, Nils and Nyczyk, Piotr and Niewiadomski, Hubert and Hoefler, Torsten},
  booktitle = {Proceedings of the Second Learning on Graphs Conference},
  pages     = {15:1--15:20},
  year      = {2024},
  editor    = {Villar, Soledad and Chamberlain, Benjamin},
  volume    = {231},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v231/besta24a/besta24a.pdf},
  url       = {https://proceedings.mlr.press/v231/besta24a.html},
  abstract  = {Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such dynamic settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of work by harnessing higher-order (HO) graph structures; specifically, k-hop neighbors and more general subgraphs containing a given pair of vertices. Harnessing such HO structures by encoding them into the attention matrix of the underlying Transformer results in higher accuracy of link prediction outcomes, but at the expense of increased memory pressure. To alleviate this, we resort to a recent class of schemes that impose hierarchy on the attention matrix, significantly reducing memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization. HOT outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15% higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the MOOC dataset. Our design can be seamlessly extended to other dynamic GRL workloads.}
}
Endnote
%0 Conference Paper
%T HOT: Higher-Order Dynamic Graph Representation Learning With Efficient Transformers
%A Maciej Besta
%A Afonso Claudino Catarino
%A Lukas Gianinazzi
%A Nils Blach
%A Piotr Nyczyk
%A Hubert Niewiadomski
%A Torsten Hoefler
%B Proceedings of the Second Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Soledad Villar
%E Benjamin Chamberlain
%F pmlr-v231-besta24a
%I PMLR
%P 15:1--15:20
%U https://proceedings.mlr.press/v231/besta24a.html
%V 231
%X Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such dynamic settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of work by harnessing higher-order (HO) graph structures; specifically, k-hop neighbors and more general subgraphs containing a given pair of vertices. Harnessing such HO structures by encoding them into the attention matrix of the underlying Transformer results in higher accuracy of link prediction outcomes, but at the expense of increased memory pressure. To alleviate this, we resort to a recent class of schemes that impose hierarchy on the attention matrix, significantly reducing memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization. HOT outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15% higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the MOOC dataset. Our design can be seamlessly extended to other dynamic GRL workloads.
APA
Besta, M., Catarino, A. C., Gianinazzi, L., Blach, N., Nyczyk, P., Niewiadomski, H., & Hoefler, T. (2024). HOT: Higher-Order Dynamic Graph Representation Learning With Efficient Transformers. Proceedings of the Second Learning on Graphs Conference, in Proceedings of Machine Learning Research 231:15:1-15:20. Available from https://proceedings.mlr.press/v231/besta24a.html.