Asynchronous Algorithmic Alignment With Cocycles

Andrew Joseph Dudzik, Tamara von Glehn, Razvan Pascanu, Petar Veličković
Proceedings of the Second Learning on Graphs Conference, PMLR 231:3:1-3:17, 2024.

Abstract

State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs). But typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously. When applying GNNs to learn to execute dynamic programming algorithms, however, on most steps only a handful of the nodes would have meaningful updates to send. One hence runs the risk of inefficiency by sending too much irrelevant data across the graph. But more importantly, many intermediate GNN steps have to learn the identity function, which is a non-trivial learning problem. In this work, we explicitly separate the concepts of node state update and message function invocation. With this separation, we obtain a mathematical formulation that allows us to reason about asynchronous computation in both algorithms and neural networks. Our analysis yields several practical implementations of synchronous scalable GNN layers that are provably invariant under various forms of asynchrony.
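The invariance property the abstract describes can be illustrated with a toy example (this is an illustrative sketch, not the paper's construction): when the node update is an idempotent, order-insensitive operation such as max, a "synchronous" schedule where every node messages all neighbours at every step and an "asynchronous" schedule where only nodes with changed state send messages reach the same fixed point. The graph, values, and function names below are all hypothetical.

```python
def propagate_sync(adj, state, steps):
    # Synchronous schedule: every node messages all of its neighbours
    # at every step, as in a standard message-passing GNN layer.
    s = dict(state)
    for _ in range(steps):
        new = dict(s)
        for u, nbrs in adj.items():
            for v in nbrs:
                new[v] = max(new[v], s[u])
        s = new
    return s

def propagate_async(adj, state):
    # Asynchronous schedule: only nodes whose state actually changed
    # send messages (a work queue), so most "identity" sends are skipped.
    s = dict(state)
    queue = list(s)
    while queue:
        u = queue.pop()
        for v in adj[u]:
            if s[u] > s[v]:
                s[v] = s[u]
                queue.append(v)
    return s

# Hypothetical directed graph and initial node values.
adj = {0: [1], 1: [2], 2: [0, 3], 3: []}
init = {0: 5, 1: 1, 2: 9, 3: 0}

# Both schedules converge to the same fixed point.
assert propagate_sync(adj, init, steps=4) == propagate_async(adj, init)
```

Because max is associative, commutative, and idempotent, the result is independent of when and how often messages are delivered; the paper's contribution is a general formulation (via cocycles) of when learned update/message pairs enjoy this kind of schedule invariance.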

Cite this Paper


BibTeX
@InProceedings{pmlr-v231-dudzik24a,
  title     = {Asynchronous Algorithmic Alignment With Cocycles},
  author    = {Dudzik, Andrew Joseph and von Glehn, Tamara and Pascanu, Razvan and Veli{\v c}kovi{\' c}, Petar},
  booktitle = {Proceedings of the Second Learning on Graphs Conference},
  pages     = {3:1--3:17},
  year      = {2024},
  editor    = {Villar, Soledad and Chamberlain, Benjamin},
  volume    = {231},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v231/dudzik24a/dudzik24a.pdf},
  url       = {https://proceedings.mlr.press/v231/dudzik24a.html},
  abstract  = {State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs). But typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously. When applying GNNs to learn to execute dynamic programming algorithms, however, on most steps only a handful of the nodes would have meaningful updates to send. One, hence, runs the risk of inefficiencies by sending too much irrelevant data across the graph. But more importantly, many intermediate GNN steps have to learn the identity functions, which is a non-trivial learning problem. In this work, we explicitly separate the concepts of node state update and message function invocation. With this separation, we obtain a mathematical formulation that allows us to reason about asynchronous computation in both algorithms and neural networks. Our analysis yields several practical implementations of synchronous scalable GNN layers that are provably invariant under various forms of asynchrony.}
}
Endnote
%0 Conference Paper
%T Asynchronous Algorithmic Alignment With Cocycles
%A Andrew Joseph Dudzik
%A Tamara von Glehn
%A Razvan Pascanu
%A Petar Veličković
%B Proceedings of the Second Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Soledad Villar
%E Benjamin Chamberlain
%F pmlr-v231-dudzik24a
%I PMLR
%P 3:1--3:17
%U https://proceedings.mlr.press/v231/dudzik24a.html
%V 231
%X State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs). But typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously. When applying GNNs to learn to execute dynamic programming algorithms, however, on most steps only a handful of the nodes would have meaningful updates to send. One, hence, runs the risk of inefficiencies by sending too much irrelevant data across the graph. But more importantly, many intermediate GNN steps have to learn the identity functions, which is a non-trivial learning problem. In this work, we explicitly separate the concepts of node state update and message function invocation. With this separation, we obtain a mathematical formulation that allows us to reason about asynchronous computation in both algorithms and neural networks. Our analysis yields several practical implementations of synchronous scalable GNN layers that are provably invariant under various forms of asynchrony.
APA
Dudzik, A.J., von Glehn, T., Pascanu, R. & Veličković, P. (2024). Asynchronous Algorithmic Alignment With Cocycles. Proceedings of the Second Learning on Graphs Conference, in Proceedings of Machine Learning Research 231:3:1-3:17. Available from https://proceedings.mlr.press/v231/dudzik24a.html.