AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks

Kai Yuan, Jiahao Zhang, Yidi Wang, Pei Xiaobing
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:113-128, 2025.

Abstract

Adversarial attacks on Graph Neural Networks aim to perturb the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and differences in graph properties. However, these methods typically disrupt task-relevant primary semantics directly, which results in low defensibility and detectability of the attack. In this paper, we propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), which is a graph structure attack model that ensures the retention of primary semantics. By combining latent representations with shared primary semantics, our model retains detectable attributes and relational patterns of the original graph while leveraging more subtle changes to carry out the attack. Then we use the Projected Gradient Descent algorithm to map the latent representations with attack effects to the adversarial graph. Through experiments on robust graph deep learning models equipped with defense strategies, we demonstrate that AHSG outperforms other state-of-the-art methods in attack effectiveness. Additionally, using Contextual Stochastic Block Models to detect the attacked graph further validates that our method preserves the primary semantics of the graph.
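The abstract mentions using Projected Gradient Descent (PGD) to map latent representations with attack effects back to an adversarial graph. AHSG's full procedure is not reproduced here, but the core PGD step common to graph structure attacks can be sketched: relax discrete edge flips to continuous scores, ascend the attack gradient, and project each iterate back onto the modification budget. The function names and the bisection-based projection below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_budget(s, budget):
    """Project scores s onto {s in [0,1]^n : sum(s) <= budget}.

    Uses bisection on a uniform shift mu, a standard projection in
    PGD-style graph structure attacks (illustrative, not AHSG-specific).
    """
    s = np.clip(s, 0.0, 1.0)
    if s.sum() <= budget:
        return s
    lo, hi = s.min() - 1.0, s.max()
    for _ in range(50):  # bisection converges well within 50 iterations
        mu = (lo + hi) / 2.0
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu  # shift too small: still over budget
        else:
            hi = mu  # feasible shift: tighten from above
    return np.clip(s - hi, 0.0, 1.0)

def pgd_structure_attack(grad_fn, n_edges, budget, steps=100, lr=0.1, seed=0):
    """Gradient ascent on relaxed edge-flip scores with projection each step.

    grad_fn(s) is a stand-in for the gradient of the attack loss w.r.t.
    the edge perturbation scores (in AHSG, derived from the latent
    representations carrying the attack effect).
    """
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.0, 1e-3, n_edges)  # near-zero initial perturbation
    for _ in range(steps):
        s = project_budget(s + lr * grad_fn(s), budget)
    return s
```

With a toy constant gradient, the budget constraint holds and the highest-gradient edge receives the most perturbation mass, which is the qualitative behavior a budgeted structure attack relies on.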

Cite this Paper

BibTeX
@InProceedings{pmlr-v304-yuan25a,
  title     = {AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks},
  author    = {Yuan, Kai and Zhang, Jiahao and Wang, Yidi and Xiaobing, Pei},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {113--128},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/yuan25a/yuan25a.pdf},
  url       = {https://proceedings.mlr.press/v304/yuan25a.html},
  abstract  = {Adversarial attacks on Graph Neural Networks aim to perturb the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and differences in graph properties. However, these methods typically disrupt task-relevant primary semantics directly, which results in low defensibility and detectability of the attack. In this paper, we propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), which is a graph structure attack model that ensures the retention of primary semantics. By combining latent representations with shared primary semantics, our model retains detectable attributes and relational patterns of the original graph while leveraging more subtle changes to carry out the attack. Then we use the Projected Gradient Descent algorithm to map the latent representations with attack effects to the adversarial graph. Through experiments on robust graph deep learning models equipped with defense strategies, we demonstrate that AHSG outperforms other state-of-the-art methods in attack effectiveness. Additionally, using Contextual Stochastic Block Models to detect the attacked graph further validates that our method preserves the primary semantics of the graph.}
}
Endnote
%0 Conference Paper
%T AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks
%A Kai Yuan
%A Jiahao Zhang
%A Yidi Wang
%A Pei Xiaobing
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-yuan25a
%I PMLR
%P 113--128
%U https://proceedings.mlr.press/v304/yuan25a.html
%V 304
%X Adversarial attacks on Graph Neural Networks aim to perturb the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and differences in graph properties. However, these methods typically disrupt task-relevant primary semantics directly, which results in low defensibility and detectability of the attack. In this paper, we propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), which is a graph structure attack model that ensures the retention of primary semantics. By combining latent representations with shared primary semantics, our model retains detectable attributes and relational patterns of the original graph while leveraging more subtle changes to carry out the attack. Then we use the Projected Gradient Descent algorithm to map the latent representations with attack effects to the adversarial graph. Through experiments on robust graph deep learning models equipped with defense strategies, we demonstrate that AHSG outperforms other state-of-the-art methods in attack effectiveness. Additionally, using Contextual Stochastic Block Models to detect the attacked graph further validates that our method preserves the primary semantics of the graph.
APA
Yuan, K., Zhang, J., Wang, Y. & Xiaobing, P. (2025). AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:113-128. Available from https://proceedings.mlr.press/v304/yuan25a.html.