Can strong structural encoding reduce the importance of Message Passing?

Floor Eijkelboom, Erik J Bekkers, Michael M. Bronstein, Francesco Di Giovanni
Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), PMLR 221:278-288, 2023.

Abstract

Message passing neural networks (MPNNs), in which the representation of a node is updated iteratively by aggregating information from its 1-hop neighborhood, are the most prevalent class of neural networks operating on graphs. Since this paradigm for computing node embeddings may prevent the model from learning coarse topological structures, the initial features are often augmented with structural information about the graph, typically in the form of Laplacian eigenvectors or random-walk transition probabilities. In this work, we explore the contribution of message passing when strong structural encodings are provided. We introduce a novel way of modeling the interaction between feature and structural information, based on their tensor product rather than the standard concatenation. We compare the two choices of interaction in common scenarios, in settings where the capacity of the message-passing layers is severely reduced, and ultimately when the message-passing phase is removed altogether. Our results indicate that tensor-based encodings are always at least on par with concatenation-based encodings, and that they make the model much more robust when the message-passing layers are removed, on some tasks incurring almost no drop in performance. This suggests that the importance of message passing is limited when the model can construct strong structural encodings.
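To make the contrast concrete, here is a minimal sketch, not the paper's implementation, of the two ways of combining node features with a structural encoding. All names, shapes, and the random graph are our own illustrative assumptions; the structural encoding shown is a random-walk one (probability of returning to a node after t steps), one of the two kinds mentioned in the abstract.

```python
import torch

# Illustrative sketch only: dimensions and names are assumptions, not the paper's code.
n, d, k, d_out = 32, 16, 8, 16           # nodes, feature dim, encoding dim, output dim
h = torch.randn(n, d)                     # node features
A = (torch.rand(n, n) < 0.1).float()      # random adjacency, symmetrized below
A = ((A + A.t()) > 0).float()

# Random-walk structural encoding: diagonal of the t-step transition matrix
# (return probability after t steps), for t = 1..k.
deg = A.sum(-1).clamp(min=1)
T = A / deg[:, None]                      # row-normalized transition matrix
P, rw = torch.eye(n), []
for _ in range(k):
    P = P @ T
    rw.append(P.diagonal())
p = torch.stack(rw, dim=-1)               # (n, k) structural encoding

# Standard interaction: concatenate features and encoding, then project.
x_cat = torch.nn.Linear(d + k, d_out)(torch.cat([h, p], dim=-1))

# Tensor-product interaction: outer product h_i ⊗ p_i per node, flattened and
# projected, so every feature channel is modulated by every structural channel.
outer = torch.einsum('nd,nk->ndk', h, p).reshape(n, d * k)
x_tp = torch.nn.Linear(d * k, d_out)(outer)
```

Concatenation lets the first linear layer only add feature and structural contributions, whereas the outer product yields a bilinear interaction between the two; the paper's finding is that this richer interaction is what keeps performance stable when message passing is weakened or removed.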

Cite this Paper


BibTeX
@InProceedings{pmlr-v221-eijkelboom23a,
  title     = {Can strong structural encoding reduce the importance of Message Passing?},
  author    = {Eijkelboom, Floor and Bekkers, Erik J and Bronstein, Michael M. and Di Giovanni, Francesco},
  booktitle = {Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)},
  pages     = {278--288},
  year      = {2023},
  editor    = {Doster, Timothy and Emerson, Tegan and Kvinge, Henry and Miolane, Nina and Papillon, Mathilde and Rieck, Bastian and Sanborn, Sophia},
  volume    = {221},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v221/eijkelboom23a/eijkelboom23a.pdf},
  url       = {https://proceedings.mlr.press/v221/eijkelboom23a.html}
}
Endnote
%0 Conference Paper
%T Can strong structural encoding reduce the importance of Message Passing?
%A Floor Eijkelboom
%A Erik J Bekkers
%A Michael M. Bronstein
%A Francesco Di Giovanni
%B Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)
%C Proceedings of Machine Learning Research
%D 2023
%E Timothy Doster
%E Tegan Emerson
%E Henry Kvinge
%E Nina Miolane
%E Mathilde Papillon
%E Bastian Rieck
%E Sophia Sanborn
%F pmlr-v221-eijkelboom23a
%I PMLR
%P 278--288
%U https://proceedings.mlr.press/v221/eijkelboom23a.html
%V 221
APA
Eijkelboom, F., Bekkers, E.J., Bronstein, M.M. & Di Giovanni, F. (2023). Can strong structural encoding reduce the importance of Message Passing? Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), in Proceedings of Machine Learning Research 221:278-288. Available from https://proceedings.mlr.press/v221/eijkelboom23a.html.