A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks

Varshita Kolipaka, Akshit Sinha, Debangan Mishra, Sumit Kumar, Arvindh Arun, Shashwat Goel, Ponnurangam Kumaraguru
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31223-31248, 2025.

Abstract

Graph Neural Networks (GNNs) are increasingly being used for a variety of ML applications on graph data. Because graph data does not follow the independently and identically distributed (i.i.d.) assumption, adversarial manipulations or incorrect data can propagate to other data points through message passing, which deteriorates the model's performance. To allow model developers to remove the adverse effects of manipulated entities from a trained GNN, we study the recently formulated problem of Corrective Unlearning. We find that current graph unlearning methods fail to unlearn the effect of manipulations even when the whole manipulated set is known. We introduce a new graph unlearning method, Cognac, which can unlearn the effect of the manipulation set even when only 5% of it is identified. It recovers most of the performance of a strong oracle with fully corrected training data, even beating retraining from scratch without the deletion set, and is 8x more efficient while also scaling to large datasets. We hope our work assists GNN developers in mitigating harmful effects caused by issues in real-world data, post-training.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kolipaka25a,
  title     = {A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks},
  author    = {Kolipaka, Varshita and Sinha, Akshit and Mishra, Debangan and Kumar, Sumit and Arun, Arvindh and Goel, Shashwat and Kumaraguru, Ponnurangam},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {31223--31248},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kolipaka25a/kolipaka25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kolipaka25a.html},
  abstract  = {Graph Neural Networks (GNNs) are increasingly being used for a variety of ML applications on graph data. Because graph data does not follow the independently and identically distributed (i.i.d.) assumption, adversarial manipulations or incorrect data can propagate to other data points through message passing, which deteriorates the model's performance. To allow model developers to remove the adverse effects of manipulated entities from a trained GNN, we study the recently formulated problem of Corrective Unlearning. We find that current graph unlearning methods fail to unlearn the effect of manipulations even when the whole manipulated set is known. We introduce a new graph unlearning method, Cognac, which can unlearn the effect of the manipulation set even when only 5% of it is identified. It recovers most of the performance of a strong oracle with fully corrected training data, even beating retraining from scratch without the deletion set, and is 8x more efficient while also scaling to large datasets. We hope our work assists GNN developers in mitigating harmful effects caused by issues in real-world data, post-training.}
}
Endnote
%0 Conference Paper
%T A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks
%A Varshita Kolipaka
%A Akshit Sinha
%A Debangan Mishra
%A Sumit Kumar
%A Arvindh Arun
%A Shashwat Goel
%A Ponnurangam Kumaraguru
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kolipaka25a
%I PMLR
%P 31223--31248
%U https://proceedings.mlr.press/v267/kolipaka25a.html
%V 267
%X Graph Neural Networks (GNNs) are increasingly being used for a variety of ML applications on graph data. Because graph data does not follow the independently and identically distributed (i.i.d.) assumption, adversarial manipulations or incorrect data can propagate to other data points through message passing, which deteriorates the model's performance. To allow model developers to remove the adverse effects of manipulated entities from a trained GNN, we study the recently formulated problem of Corrective Unlearning. We find that current graph unlearning methods fail to unlearn the effect of manipulations even when the whole manipulated set is known. We introduce a new graph unlearning method, Cognac, which can unlearn the effect of the manipulation set even when only 5% of it is identified. It recovers most of the performance of a strong oracle with fully corrected training data, even beating retraining from scratch without the deletion set, and is 8x more efficient while also scaling to large datasets. We hope our work assists GNN developers in mitigating harmful effects caused by issues in real-world data, post-training.
APA
Kolipaka, V., Sinha, A., Mishra, D., Kumar, S., Arun, A., Goel, S., & Kumaraguru, P. (2025). A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:31223-31248. Available from https://proceedings.mlr.press/v267/kolipaka25a.html.