GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs

Yue Wang, Qizhou Wang, Feng Liu, Wei Huang, Yali Du, Xiaojiang Du, Bo Han
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:64690-64710, 2025.

Abstract

Large language model (LLM) unlearning plays an essential role in removing privacy- and copyright-related responses, which is crucial for the legal and safe deployment of these models. However, pursuing complete unlearning often incurs substantial costs by compromising general model functionality, leading to a notorious trade-off between unlearning and retention. This motivates us to explore enhanced unlearning schemes that can mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure so that their side effects on other, unrelated responses are minimized. GRU is simple and general to implement, demonstrating practical effectiveness across a variety of well-established unlearning benchmarks.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wang25de,
  title     = {{GRU}: Mitigating the Trade-off between Unlearning and Retention for {LLM}s},
  author    = {Wang, Yue and Wang, Qizhou and Liu, Feng and Huang, Wei and Du, Yali and Du, Xiaojiang and Han, Bo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {64690--64710},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25de/wang25de.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25de.html},
  abstract  = {Large language model (LLM) unlearning has demonstrated its essential role in removing privacy and copyright-related responses, crucial for their legal and safe applications. However, the pursuit of complete unlearning often comes with substantial costs due to its compromises in their general functionality, leading to a notorious trade-off between unlearning and retention. It motivates this paper to explore enhanced unlearning schemes that can mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure such that their side impacts on other, unrelated responses can be minimized. GRU is easy and general to implement, demonstrating practical effectiveness across a variety of well-established unlearning benchmarks.}
}
Endnote
%0 Conference Paper
%T GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs
%A Yue Wang
%A Qizhou Wang
%A Feng Liu
%A Wei Huang
%A Yali Du
%A Xiaojiang Du
%A Bo Han
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25de
%I PMLR
%P 64690--64710
%U https://proceedings.mlr.press/v267/wang25de.html
%V 267
%X Large language model (LLM) unlearning has demonstrated its essential role in removing privacy and copyright-related responses, crucial for their legal and safe applications. However, the pursuit of complete unlearning often comes with substantial costs due to its compromises in their general functionality, leading to a notorious trade-off between unlearning and retention. It motivates this paper to explore enhanced unlearning schemes that can mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure such that their side impacts on other, unrelated responses can be minimized. GRU is easy and general to implement, demonstrating practical effectiveness across a variety of well-established unlearning benchmarks.
APA
Wang, Y., Wang, Q., Liu, F., Huang, W., Du, Y., Du, X. & Han, B. (2025). GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:64690-64710. Available from https://proceedings.mlr.press/v267/wang25de.html.
