Motif-Aware Attribute Masking for Molecular Graph Pre-Training

Eric Inae, Gang Liu, Meng Jiang
Proceedings of the Third Learning on Graphs Conference, PMLR 269:41:1-41:15, 2025.

Abstract

Attribute reconstruction is used to predict node or edge features in the pre-training of graph neural networks. Given a large number of molecules, the networks learn to capture structural knowledge that transfers to various downstream property prediction tasks and is vital in chemistry, biomedicine, and materials science. Previous strategies that randomly select nodes for attribute masking rely on the information of local neighbors. However, over-reliance on these neighbors inhibits the model's ability to learn long-range dependencies from higher-level substructures, such as functional groups or chemical motifs. To explicitly measure and encourage inter-motif knowledge transfer in pre-trained models, we define inter-motif node influence measures and propose a novel motif-aware attribute masking strategy that captures long-range inter-motif structure by leveraging the information of atoms in neighboring motifs. Once each graph is decomposed into disjoint motifs, the features of every node within a sampled motif are masked and subsequently predicted by a graph decoder. We evaluate our approach on eleven molecular classification and regression datasets and demonstrate its advantages.
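
The masking procedure outlined in the abstract (decompose a molecule into disjoint motifs, then mask and reconstruct the features of every node in a sampled motif) can be illustrated with a minimal sketch. The snippet below assumes a BRICS-style decomposition via RDKit and a plain list-of-lists node-feature matrix; helper names such as motif_node_sets and mask_one_motif are illustrative and are not the authors' implementation.

    import random
    from rdkit import Chem
    from rdkit.Chem import BRICS

    def motif_node_sets(mol):
        """Partition atom indices into disjoint motifs by cutting BRICS bonds."""
        # FindBRICSBonds yields ((begin_atom, end_atom), (label_i, label_j)) pairs.
        cut_bonds = [mol.GetBondBetweenAtoms(i, j).GetIdx()
                     for (i, j), _ in BRICS.FindBRICSBonds(mol)]
        if not cut_bonds:
            return [list(range(mol.GetNumAtoms()))]
        fragmented = Chem.FragmentOnBonds(mol, cut_bonds, addDummies=False)
        # GetMolFrags returns one tuple of original atom indices per fragment.
        return [list(frag) for frag in Chem.GetMolFrags(fragmented)]

    def mask_one_motif(node_features, motifs, mask_value=0.0, rng=random):
        """Mask the features of every node in one randomly sampled motif."""
        target = rng.choice(motifs)
        masked = [list(row) for row in node_features]
        for atom_idx in target:
            masked[atom_idx] = [mask_value] * len(masked[atom_idx])
        # A graph decoder would then be trained to reconstruct the features of target.
        return masked, target

    # Example: mask an entire motif of aspirin instead of random individual atoms.
    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
    features = [[atom.GetAtomicNum(), atom.GetDegree()] for atom in mol.GetAtoms()]
    motifs = motif_node_sets(mol)
    masked_features, masked_nodes = mask_one_motif(features, motifs)

In contrast to random node-level masking, the masked positions here always form a contiguous substructure, so reconstructing them forces the encoder to draw on atoms outside the masked motif.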

Cite this Paper


BibTeX
@InProceedings{pmlr-v269-inae25a,
  title     = {Motif-Aware Attribute Masking for Molecular Graph Pre-Training},
  author    = {Inae, Eric and Liu, Gang and Jiang, Meng},
  booktitle = {Proceedings of the Third Learning on Graphs Conference},
  pages     = {41:1--41:15},
  year      = {2025},
  editor    = {Wolf, Guy and Krishnaswamy, Smita},
  volume    = {269},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--29 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v269/main/assets/inae25a/inae25a.pdf},
  url       = {https://proceedings.mlr.press/v269/inae25a.html}
}
Endnote
%0 Conference Paper
%T Motif-Aware Attribute Masking for Molecular Graph Pre-Training
%A Eric Inae
%A Gang Liu
%A Meng Jiang
%B Proceedings of the Third Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Guy Wolf
%E Smita Krishnaswamy
%F pmlr-v269-inae25a
%I PMLR
%P 41:1--41:15
%U https://proceedings.mlr.press/v269/inae25a.html
%V 269
APA
Inae, E., Liu, G. & Jiang, M. (2025). Motif-Aware Attribute Masking for Molecular Graph Pre-Training. Proceedings of the Third Learning on Graphs Conference, in Proceedings of Machine Learning Research 269:41:1-41:15. Available from https://proceedings.mlr.press/v269/inae25a.html.
