Countering Relearning with Perception Revising Unlearning

Chenhao Zhang, Weitong Chen, Wei Emma Zhang, Miao Xu
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:1336-1351, 2025.

Abstract

Unlearning methods that rely solely on forgetting data typically modify the network’s decision boundary to achieve unlearning. However, these approaches are susceptible to the "relearning" problem, whereby the network may recall the forgotten class upon subsequent updates with the remaining class data. Our experimental analysis reveals that, although these modifications alter the decision boundary, the network’s underlying perception of the samples remains largely unchanged. To address the relearning problem, we introduce the Perception Revising Unlearning (PRU) framework. PRU employs a probability redistribution method that assigns new labels and more precise supervision information to each forgetting-class instance, actively shifting the network’s perception of forgetting-class samples toward the remaining classes. Experimental results demonstrate that PRU not only achieves strong classification performance but also significantly reduces the risk of relearning, suggesting a robust approach to class unlearning tasks that depend solely on forgetting data.
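
To make the idea concrete, the following minimal PyTorch-style sketch shows one plausible form of a probability redistribution step; it is an illustration under assumptions, not the paper's exact PRU algorithm, and the names redistribute_labels, revise_perception_step, model, and optimizer are hypothetical. The forgotten class's probability mass is zeroed and the remainder renormalized into soft labels over the remaining classes, which are then used to fine-tune the network on the forgetting samples.

import torch
import torch.nn.functional as F

def redistribute_labels(model, forget_inputs, forget_class):
    # Illustrative sketch: build soft labels for forgetting-class samples by
    # zeroing the forgotten class's probability and renormalizing the rest.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(forget_inputs), dim=1)   # (N, num_classes)
    probs[:, forget_class] = 0.0                         # drop mass on the forgotten class
    denom = probs.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return probs / denom                                 # redistribute over remaining classes

def revise_perception_step(model, optimizer, forget_inputs, soft_labels):
    # One fine-tuning step on the redistributed soft labels, nudging the
    # network's predictions for forgetting samples toward the remaining classes.
    model.train()
    optimizer.zero_grad()
    log_probs = F.log_softmax(model(forget_inputs), dim=1)
    loss = -(soft_labels * log_probs).sum(dim=1).mean()  # soft-label cross-entropy
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, the soft labels carry the "more precise supervision information" mentioned above: rather than merely pushing predictions away from the forgotten class, they tell the network which remaining classes each forgetting sample should now be perceived as.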

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-zhang25d,
  title     = {Countering Relearning with Perception Revising Unlearning},
  author    = {Zhang, Chenhao and Chen, Weitong and Zhang, Wei Emma and Xu, Miao},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {1336--1351},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/zhang25d/zhang25d.pdf},
  url       = {https://proceedings.mlr.press/v260/zhang25d.html},
  abstract  = {Unlearning methods that rely solely on forgetting data typically modify the network’s decision boundary to achieve unlearning. However, these approaches are susceptible to the "relearning" problem, whereby the network may recall the forgotten class upon subsequent updates with the remaining class data. Our experimental analysis reveals that, although these modifications alter the decision boundary, the network’s fundamental perception of the samples remains mostly unchanged. In response to the relearning problem, we introduce the Perception Revising Unlearning (PRU) framework. PRU employs a probability redistribution method, which assigns new labels and more precise supervision information to each forgetting class instance. The PRU actively shifts the network’s perception of forgetting class samples toward other remaining classes. The experimental results demonstrate that PRU not only has good classification effectiveness but also significantly reduces the risk of relearning, suggesting a robust approach to class unlearning tasks that depend solely on forgetting data.}
}
Endnote
%0 Conference Paper
%T Countering Relearning with Perception Revising Unlearning
%A Chenhao Zhang
%A Weitong Chen
%A Wei Emma Zhang
%A Miao Xu
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-zhang25d
%I PMLR
%P 1336--1351
%U https://proceedings.mlr.press/v260/zhang25d.html
%V 260
%X Unlearning methods that rely solely on forgetting data typically modify the network’s decision boundary to achieve unlearning. However, these approaches are susceptible to the "relearning" problem, whereby the network may recall the forgotten class upon subsequent updates with the remaining class data. Our experimental analysis reveals that, although these modifications alter the decision boundary, the network’s fundamental perception of the samples remains mostly unchanged. In response to the relearning problem, we introduce the Perception Revising Unlearning (PRU) framework. PRU employs a probability redistribution method, which assigns new labels and more precise supervision information to each forgetting class instance. The PRU actively shifts the network’s perception of forgetting class samples toward other remaining classes. The experimental results demonstrate that PRU not only has good classification effectiveness but also significantly reduces the risk of relearning, suggesting a robust approach to class unlearning tasks that depend solely on forgetting data.
APA
Zhang, C., Chen, W., Zhang, W.E. & Xu, M. (2025). Countering Relearning with Perception Revising Unlearning. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:1336-1351. Available from https://proceedings.mlr.press/v260/zhang25d.html.
