GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets

Shubham Gupta, Sahil Manchanda, Sayan Ranu, Srikanta J. Bedathur
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:12165-12181, 2023.

Abstract

Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets.
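The core construction the abstract describes — decoupling nodes from their features via a bipartite encoding — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the function name `allotropic_encoding` and the dict-based input format are assumptions made for the example.

```python
def allotropic_encoding(node_features):
    """Bipartite encoding sketch: node_features maps each node id to a dict
    of {feature name: value} holding only the features that node observes.
    Returns the node vertices, feature vertices, and weighted bipartite
    edges (node, feature, value) of the transformed graph."""
    nodes = set(node_features)
    features = set()
    edges = []
    for v, feats in node_features.items():
        for f, x in feats.items():
            features.add(f)  # each distinct feature becomes its own vertex
            edges.append((v, f, x))  # connect a node to a feature it observes
    return nodes, features, edges

# Heterogeneous feature sets: "u2" lacks "age", and "income" appears only on
# "u3". A newly arriving feature just adds one feature-vertex and its edges,
# which is why model parameters can stay independent of the feature count.
graph = {
    "u1": {"age": 34, "degree": 3},
    "u2": {"degree": 5},
    "u3": {"age": 27, "income": 50_000},
}
nodes, feats, edges = allotropic_encoding(graph)
```

Message passing then alternates over these node-feature edges, so the per-layer parameters depend on the embedding width rather than on how many feature vertices exist.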

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-gupta23b,
  title     = {{GRAFENNE}: Learning on Graphs with Heterogeneous and Dynamic Feature Sets},
  author    = {Gupta, Shubham and Manchanda, Sahil and Ranu, Sayan and Bedathur, Srikanta J.},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {12165--12181},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/gupta23b/gupta23b.pdf},
  url       = {https://proceedings.mlr.press/v202/gupta23b.html},
  abstract  = {Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets.}
}
Endnote
%0 Conference Paper
%T GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets
%A Shubham Gupta
%A Sahil Manchanda
%A Sayan Ranu
%A Srikanta J. Bedathur
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-gupta23b
%I PMLR
%P 12165--12181
%U https://proceedings.mlr.press/v202/gupta23b.html
%V 202
%X Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets.
APA
Gupta, S., Manchanda, S., Ranu, S., & Bedathur, S. J. (2023). GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 202:12165-12181. Available from https://proceedings.mlr.press/v202/gupta23b.html.