Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection

Weilin Cong, Mehrdad Mahdavi
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6674-6703, 2023.

Abstract

As privacy protection draws increasing attention, unlearning the effect of specific nodes from a pre-trained graph learning model has become equally important. However, because of the node dependency in graph-structured data, representation unlearning in Graph Neural Networks (GNNs) is challenging and underexplored. In this paper, we fill this gap by first studying the unlearning problem in linear GNNs and then extending it to non-linear architectures. Given a set of nodes to unlearn, we propose Projector, which unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to the features of the nodes to be forgotten. Projector overcomes the challenges caused by node dependency and enjoys perfect data removal: the unlearned model parameters contain no information about the unlearned node features, which is guaranteed by algorithmic construction. Empirical results on real-world datasets illustrate the effectiveness and efficiency of Projector.
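The core projection idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's full algorithm (which additionally handles node dependency introduced by graph propagation): given a pre-trained linear weight vector `w` and a hypothetical matrix `X_forget` whose rows are the features of nodes to be forgotten, we project `w` onto the orthogonal complement of the span of those rows, so the unlearned weights carry no component along the forgotten features.

```python
import numpy as np

def project_out(w, X_forget, tol=1e-10):
    """Remove from w every component lying in the span of the
    forgotten nodes' feature rows, so that X_forget @ w_new is 0."""
    # Columns of U form an orthonormal basis of the row space of X_forget.
    U, s, _ = np.linalg.svd(X_forget.T, full_matrices=False)
    U = U[:, s > tol]  # keep only directions with non-negligible singular values
    # Subtract the component of w inside that subspace.
    return w - U @ (U.T @ w)

# Hypothetical example: 3 nodes to unlearn, 8-dimensional features.
rng = np.random.default_rng(0)
X_forget = rng.normal(size=(3, 8))
w = rng.normal(size=8)            # stand-in for pre-trained weights
w_unlearned = project_out(w, X_forget)
```

After the projection, `X_forget @ w_unlearned` is numerically zero, i.e., the unlearned weights are orthogonal to all forgotten feature vectors, and projecting a second time changes nothing (the operation is idempotent).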

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-cong23a,
  title     = {Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection},
  author    = {Cong, Weilin and Mahdavi, Mehrdad},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {6674--6703},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/cong23a/cong23a.pdf},
  url       = {https://proceedings.mlr.press/v206/cong23a.html},
  abstract  = {As privacy protection receives much attention, unlearning the effect of a specific node from a pre-trained graph learning model has become equally important. However, due to the node dependency in the graph-structured data, representation unlearning in Graph Neural Networks (GNNs) is challenging and less well explored. In this paper, we fill in this gap by first studying the unlearning problem in linear-GNNs, and then introducing its extension to non-linear structures. Given a set of nodes to unlearn, we propose Projector that unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to features of the nodes to be forgotten. Projector could overcome the challenges caused by node dependency and enjoys perfect data removal, i.e., the unlearned model parameters do not contain any information about the unlearned node features which is guaranteed by algorithmic construction. Empirical results on real-world datasets illustrate the effectiveness and efficiency of Projector.}
}
Endnote
%0 Conference Paper
%T Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection
%A Weilin Cong
%A Mehrdad Mahdavi
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-cong23a
%I PMLR
%P 6674--6703
%U https://proceedings.mlr.press/v206/cong23a.html
%V 206
%X As privacy protection receives much attention, unlearning the effect of a specific node from a pre-trained graph learning model has become equally important. However, due to the node dependency in the graph-structured data, representation unlearning in Graph Neural Networks (GNNs) is challenging and less well explored. In this paper, we fill in this gap by first studying the unlearning problem in linear-GNNs, and then introducing its extension to non-linear structures. Given a set of nodes to unlearn, we propose Projector that unlearns by projecting the weight parameters of the pre-trained model onto a subspace that is irrelevant to features of the nodes to be forgotten. Projector could overcome the challenges caused by node dependency and enjoys perfect data removal, i.e., the unlearned model parameters do not contain any information about the unlearned node features which is guaranteed by algorithmic construction. Empirical results on real-world datasets illustrate the effectiveness and efficiency of Projector.
APA
Cong, W. & Mahdavi, M. (2023). Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:6674-6703. Available from https://proceedings.mlr.press/v206/cong23a.html.