Spectral Maps for Learning on Subgraphs

Marco Pegoraro, Riccardo Marin, Arianna Rampini, Simone Melzi, Luca Cosmo, Emanuele Rodolà
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:183-205, 2024.

Abstract

In graph learning, maps between graphs and their subgraphs frequently arise. For instance, when coarsening or rewiring operations are present along the pipeline, one needs to keep track of the corresponding nodes between the original and modified graphs. Classically, these maps are represented as binary node-to-node correspondence matrices, and used as-is to transfer node-wise features between the graphs. In this paper, we argue that simply changing this map representation can bring notable benefits to graph learning tasks. Drawing inspiration from recent progress in geometry processing, we introduce a spectral representation for maps that is easy to integrate into existing graph learning models. This spectral representation is a compact and straightforward plug-in replacement, and is robust to topological changes of the graphs. Remarkably, the representation exhibits structural properties that make it interpretable, drawing an analogy with recent results on smooth manifolds. We demonstrate the benefits of incorporating spectral maps in graph learning pipelines, addressing scenarios where a node-to-node map is not well defined, or in the absence of exact isomorphism. Our approach bears practical benefits in knowledge distillation and hierarchical learning, where we show comparable or improved performance at a fraction of the computational cost.
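To make the abstract's central idea concrete, the following sketch (in plain NumPy/SciPy, on a toy 6-node cycle graph) shows how a binary node-to-node correspondence P between a graph and one of its subgraphs can be replaced by a small k x k spectral map C expressed in the two graphs' Laplacian eigenbases. The least-squares construction and every name in the snippet are our own illustrative assumptions, mirroring the standard functional-map recipe from geometry processing rather than the paper's exact formulation.

import numpy as np
from scipy.linalg import eigh, lstsq
from scipy.sparse.csgraph import laplacian

def laplacian_basis(adj, k):
    # First k eigenvectors of the symmetric normalized graph Laplacian.
    L = laplacian(adj, normed=True)
    _, vecs = eigh(L)
    return vecs[:, :k]                 # (num_nodes, k), columns sorted by eigenvalue

# Toy example: a 6-node cycle and the subgraph induced by nodes {0, 1, 2, 3}.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
sub_nodes = [0, 1, 2, 3]
adj_sub = adj[np.ix_(sub_nodes, sub_nodes)]

# Classical map representation: binary node-to-node matrix (subgraph x graph).
P = np.zeros((len(sub_nodes), n))
P[np.arange(len(sub_nodes)), sub_nodes] = 1.0

# Spectral map: small k x k matrix relating the two Laplacian eigenbases,
# found by least squares so that Phi_sub @ C ~ P @ Phi_full.
k = 4
phi_full = laplacian_basis(adj, k)
phi_sub = laplacian_basis(adj_sub, k)
C = lstsq(phi_sub, P @ phi_full)[0]    # (k, k)

# Transfer a node-wise feature f from the full graph to the subgraph.
rng = np.random.default_rng(0)
f = rng.random((n, 1))
f_sub_spectral = phi_sub @ (C @ (phi_full.T @ f))   # via the spectral map
f_sub_pointwise = P @ f                              # via the binary matrix
# The two transfers agree up to the truncation to the first k eigenvectors.
print(np.abs(f_sub_spectral - f_sub_pointwise).max())

Because C acts on only k spectral coefficients rather than on individual nodes, it stays compact even for large graphs, which is the practical advantage the abstract highlights as a plug-in replacement for the binary correspondence matrix.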

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-pegoraro24a,
  title     = {Spectral Maps for Learning on Subgraphs},
  author    = {Pegoraro, Marco and Marin, Riccardo and Rampini, Arianna and Melzi, Simone and Cosmo, Luca and Rodol\`{a}, Emanuele},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {183--205},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/pegoraro24a/pegoraro24a.pdf},
  url       = {https://proceedings.mlr.press/v228/pegoraro24a.html}
}
Endnote
%0 Conference Paper
%T Spectral Maps for Learning on Subgraphs
%A Marco Pegoraro
%A Riccardo Marin
%A Arianna Rampini
%A Simone Melzi
%A Luca Cosmo
%A Emanuele Rodolà
%B Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2024
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Nina Miolane
%F pmlr-v228-pegoraro24a
%I PMLR
%P 183--205
%U https://proceedings.mlr.press/v228/pegoraro24a.html
%V 228
APA
Pegoraro, M., Marin, R., Rampini, A., Melzi, S., Cosmo, L., & Rodolà, E. (2024). Spectral Maps for Learning on Subgraphs. Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 228:183-205. Available from https://proceedings.mlr.press/v228/pegoraro24a.html.
