Adversarial Attacks on Node Embeddings via Graph Poisoning

Aleksandar Bojchevski, Stephan Günnemann
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:695-704, 2019.

Abstract

The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models and are successful even when the attacker is restricted.
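The mechanics of the attack surface described above — poisoning the graph structure before the embeddings are trained — can be illustrated with a minimal sketch. This is not the paper's perturbation-selection algorithm (which chooses flips to degrade embedding quality); it only shows what a set of adversarial edge flips does to an adjacency matrix. The helper `poison_graph` and the toy graph are illustrative assumptions.

```python
import numpy as np

def poison_graph(adj, flips):
    """Apply a set of edge flips to a symmetric adjacency matrix:
    each (i, j) pair removes the edge if present, adds it if absent.
    Illustrative only; the paper selects flips adversarially."""
    adj = adj.copy()
    for i, j in flips:
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj

# Tiny 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# Two hypothetical flips: remove edge (1, 2), add edge (0, 3)
A_poisoned = poison_graph(A, [(1, 2), (0, 3)])
```

A random-walk embedding method (e.g. DeepWalk) trained on `A_poisoned` instead of `A` would then produce degraded node representations, which is the poisoning setting the paper analyzes.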

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-bojchevski19a,
  title     = {Adversarial Attacks on Node Embeddings via Graph Poisoning},
  author    = {Bojchevski, Aleksandar and G{\"u}nnemann, Stephan},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {695--704},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/bojchevski19a/bojchevski19a.pdf},
  url       = {https://proceedings.mlr.press/v97/bojchevski19a.html},
  abstract  = {The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models and are successful even when the attacker is restricted.}
}
Endnote
%0 Conference Paper
%T Adversarial Attacks on Node Embeddings via Graph Poisoning
%A Aleksandar Bojchevski
%A Stephan Günnemann
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-bojchevski19a
%I PMLR
%P 695--704
%U https://proceedings.mlr.press/v97/bojchevski19a.html
%V 97
%X The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models and are successful even when the attacker is restricted.
APA
Bojchevski, A. & Günnemann, S. (2019). Adversarial Attacks on Node Embeddings via Graph Poisoning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:695-704. Available from https://proceedings.mlr.press/v97/bojchevski19a.html.