LMEraser: Large Model Unlearning via Adaptive Prompt Tuning

Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2026-2034, 2025.

Abstract

To address the growing demand for privacy protection in machine learning, we propose an efficient and exact machine unlearning method for Large Models, called LMEraser. LMEraser takes a divide-and-conquer strategy with an adaptive prompt tuning mechanism to isolate data influence effectively. The training dataset is partitioned into public and private datasets. Public data are used to train the backbone of the model. Private data are clustered based on their diversity, and each cluster tunes a tailored prompt independently. This approach enables targeted unlearning by updating affected prompts, significantly reduces unlearning costs and maintains high model performance. Evaluations show that LMEraser reduces unlearning costs by 100 times compared to prior work without compromising model utility.
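The abstract's unlearning recipe (partition data, cluster the private portion, tune one prompt per cluster, and unlearn by retraining only the affected prompt) can be illustrated with a toy sketch. The clustering rule and `train_prompt` below are placeholder stand-ins chosen for illustration, not the paper's diversity-based clustering or actual prompt-tuning procedure; the point is only how per-cluster isolation makes unlearning exact and cheap.

```python
# Toy sketch of LMEraser-style exact unlearning via per-cluster prompts.
# Assumptions (not from the paper): data items are integers, clustering is
# value-mod-k, and a "prompt" is just the cluster sum.

def cluster(private_data, k):
    # Stand-in for the paper's diversity-based clustering of private data.
    groups = {i: [] for i in range(k)}
    for x in private_data:
        groups[x % k].append(x)
    return groups

def train_prompt(cluster_data):
    # Stand-in for tuning a tailored prompt on a frozen, public-data backbone.
    return sum(cluster_data)

def fit(private_data, k):
    # Train one independent prompt per cluster.
    groups = cluster(private_data, k)
    prompts = {i: train_prompt(d) for i, d in groups.items()}
    return groups, prompts

def unlearn(groups, prompts, item, k):
    # Exact unlearning: drop the item and retrain ONLY its cluster's prompt.
    # All other prompts and the public-data backbone are untouched.
    cid = item % k
    groups[cid].remove(item)
    prompts[cid] = train_prompt(groups[cid])
    return prompts
```

Because each prompt depends only on its own cluster, removing one training item never forces retraining of the backbone or of the other clusters' prompts, which is the source of the claimed cost reduction.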

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-xu25e,
  title     = {LMEraser: Large Model Unlearning via Adaptive Prompt Tuning},
  author    = {Xu, Jie and Wu, Zihan and Wang, Cong and Jia, Xiaohua},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2026--2034},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/xu25e/xu25e.pdf},
  url       = {https://proceedings.mlr.press/v258/xu25e.html},
  abstract  = {To address the growing demand for privacy protection in machine learning, we propose an efficient and exact machine unlearning method for Large Models, called LMEraser. LMEraser takes a divide-and-conquer strategy with an adaptive prompt tuning mechanism to isolate data influence effectively. The training dataset is partitioned into public and private datasets. Public data are used to train the backbone of the model. Private data are clustered based on their diversity, and each cluster tunes a tailored prompt independently. This approach enables targeted unlearning by updating affected prompts, significantly reduces unlearning costs and maintains high model performance. Evaluations show that LMEraser reduces unlearning costs by 100 times compared to prior work without compromising model utility.}
}
Endnote
%0 Conference Paper
%T LMEraser: Large Model Unlearning via Adaptive Prompt Tuning
%A Jie Xu
%A Zihan Wu
%A Cong Wang
%A Xiaohua Jia
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-xu25e
%I PMLR
%P 2026--2034
%U https://proceedings.mlr.press/v258/xu25e.html
%V 258
%X To address the growing demand for privacy protection in machine learning, we propose an efficient and exact machine unlearning method for Large Models, called LMEraser. LMEraser takes a divide-and-conquer strategy with an adaptive prompt tuning mechanism to isolate data influence effectively. The training dataset is partitioned into public and private datasets. Public data are used to train the backbone of the model. Private data are clustered based on their diversity, and each cluster tunes a tailored prompt independently. This approach enables targeted unlearning by updating affected prompts, significantly reduces unlearning costs and maintains high model performance. Evaluations show that LMEraser reduces unlearning costs by 100 times compared to prior work without compromising model utility.
APA
Xu, J., Wu, Z., Wang, C., & Jia, X. (2025). LMEraser: Large Model Unlearning via Adaptive Prompt Tuning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2026-2034. Available from https://proceedings.mlr.press/v258/xu25e.html.