HyperNear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks

Tingyi Cai, Yunliang Jiang, Ming Li, Lu Bai, Changqin Huang, Yi Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:6193-6210, 2025.

Abstract

With the growing adoption of Hypergraph Neural Networks (HNNs) to model higher-order relationships in complex data, concerns about their security and robustness have become increasingly important. However, current security research often overlooks the unique structural characteristics of hypergraph models when developing adversarial attack and defense strategies. To address this gap, we demonstrate that hypergraphs are particularly vulnerable to node injection attacks, which align closely with real-world applications. Through empirical analysis, we develop a relatively unnoticeable attack approach by monitoring changes in homophily and leveraging this self-regulating property to enhance stealth. Building on these insights, we introduce HyperNear, i.e., $\underline{N}$ode inj$\underline{E}$ction $\underline{A}$ttacks on hype$\underline{R}$graph neural networks, the first node injection attack framework specifically tailored for HNNs. HyperNear integrates homophily-preserving strategies to optimize both stealth and attack effectiveness. Extensive experiments show that HyperNear achieves excellent performance and generalization, marking the first comprehensive study of injection attacks on hypergraphs. Our code is available at https://github.com/ca1man-2022/HyperNear.
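The stealth mechanism described in the abstract hinges on monitoring how hyperedge homophily shifts as adversarial nodes are injected. As a rough Python sketch (not the authors' released implementation; the incidence-matrix representation, the pairwise within-edge homophily definition, and the helper names edge_homophily and inject_node are all illustrative assumptions), the following shows how one might measure the homophily change caused by a single injected node:

```python
import numpy as np

def edge_homophily(H: np.ndarray, y: np.ndarray) -> float:
    """Fraction of same-label node pairs within each hyperedge, averaged
    over hyperedges with at least two members (one common definition;
    the paper may use a different homophily measure)."""
    scores = []
    for e in range(H.shape[1]):
        members = np.flatnonzero(H[:, e])
        if members.size < 2:
            continue
        labels = y[members]
        same = sum(int(labels[i] == labels[j])
                   for i in range(labels.size)
                   for j in range(i + 1, labels.size))
        pairs = labels.size * (labels.size - 1) // 2
        scores.append(same / pairs)
    return float(np.mean(scores)) if scores else 1.0

def inject_node(H: np.ndarray, target_edges) -> np.ndarray:
    """Append one adversarial node connected to the given hyperedges."""
    row = np.zeros((1, H.shape[1]), dtype=H.dtype)
    row[0, target_edges] = 1
    return np.vstack([H, row])

# Toy hypergraph: 4 nodes, 2 hyperedges (incidence matrix), binary labels.
H = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]])
y = np.array([0, 0, 1, 1])

H_atk = inject_node(H, target_edges=[0])
y_atk = np.append(y, 0)  # hypothetical pseudo-label for the injected node
print(f"homophily before={edge_homophily(H, y):.3f}, "
      f"after={edge_homophily(H_atk, y_atk):.3f}")
```

In this toy example the injected node's hyperedge memberships and pseudo-label keep the before/after homophily gap small; an actual attack would optimize the injected features and connections jointly for stealth and misclassification, as implemented in the linked repository.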

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-cai25b,
  title = {{H}yper{N}ear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks},
  author = {Cai, Tingyi and Jiang, Yunliang and Li, Ming and Bai, Lu and Huang, Changqin and Wang, Yi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {6193--6210},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cai25b/cai25b.pdf},
  url = {https://proceedings.mlr.press/v267/cai25b.html},
  abstract = {With the growing adoption of Hypergraph Neural Networks (HNNs) to model higher-order relationships in complex data, concerns about their security and robustness have become increasingly important. However, current security research often overlooks the unique structural characteristics of hypergraph models when developing adversarial attack and defense strategies. To address this gap, we demonstrate that hypergraphs are particularly vulnerable to node injection attacks, which align closely with real-world applications. Through empirical analysis, we develop a relatively unnoticeable attack approach by monitoring changes in homophily and leveraging this self-regulating property to enhance stealth. Building on these insights, we introduce HyperNear, i.e., $\underline{N}$ode inj$\underline{E}$ction $\underline{A}$ttacks on hype$\underline{R}$graph neural networks, the first node injection attack framework specifically tailored for HNNs. HyperNear integrates homophily-preserving strategies to optimize both stealth and attack effectiveness. Extensive experiments show that HyperNear achieves excellent performance and generalization, marking the first comprehensive study of injection attacks on hypergraphs. Our code is available at https://github.com/ca1man-2022/HyperNear.}
}
EndNote
%0 Conference Paper
%T HyperNear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks
%A Tingyi Cai
%A Yunliang Jiang
%A Ming Li
%A Lu Bai
%A Changqin Huang
%A Yi Wang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cai25b
%I PMLR
%P 6193--6210
%U https://proceedings.mlr.press/v267/cai25b.html
%V 267
%X With the growing adoption of Hypergraph Neural Networks (HNNs) to model higher-order relationships in complex data, concerns about their security and robustness have become increasingly important. However, current security research often overlooks the unique structural characteristics of hypergraph models when developing adversarial attack and defense strategies. To address this gap, we demonstrate that hypergraphs are particularly vulnerable to node injection attacks, which align closely with real-world applications. Through empirical analysis, we develop a relatively unnoticeable attack approach by monitoring changes in homophily and leveraging this self-regulating property to enhance stealth. Building on these insights, we introduce HyperNear, i.e., $\underline{N}$ode inj$\underline{E}$ction $\underline{A}$ttacks on hype$\underline{R}$graph neural networks, the first node injection attack framework specifically tailored for HNNs. HyperNear integrates homophily-preserving strategies to optimize both stealth and attack effectiveness. Extensive experiments show that HyperNear achieves excellent performance and generalization, marking the first comprehensive study of injection attacks on hypergraphs. Our code is available at https://github.com/ca1man-2022/HyperNear.
APA
Cai, T., Jiang, Y., Li, M., Bai, L., Huang, C. & Wang, Y. (2025). HyperNear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:6193-6210. Available from https://proceedings.mlr.press/v267/cai25b.html.
