PDUDT: Provable Decentralized Unlearning under Dynamic Topologies

Jing Qiao, Yu Liu, Zengzhe Chen, Mingyi Li, Yuan Yuan, Xiao Zhang, Dongxiao Yu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:50142-50170, 2025.

Abstract

This paper investigates decentralized unlearning, which aims to eliminate the impact of a specific client on the whole decentralized system. However, the characteristics of decentralized communication pose new challenges for effective unlearning: indirect connections make it difficult to trace the specific client’s impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose PDUDT, the first Provable Decentralized Unlearning algorithm under Dynamic Topologies. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing it is statistically indistinguishable from perturbed retraining. Additionally, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total number of communication rounds; this rate matches state-of-the-art results. Experimental results show that, compared with the Retrain method, PDUDT saves more than 99% of unlearning time while achieving comparable unlearning performance.
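For readers unfamiliar with the guarantee, "statistically indistinguishable from perturbed retraining" is typically formalized as $(\epsilon, \delta)$-indistinguishability in the certified-unlearning literature. The display below is a minimal sketch of that standard definition under assumed notation ($\mathcal{A}$ for the training algorithm, $\mathcal{U}$ for the unlearning algorithm, $D$ for the joint data, $D_k$ for the departing client's data, $\widetilde{\mathcal{A}}$ for retraining with calibrated noise), not the paper's exact theorem statement.

% Sketch of the standard (\epsilon, \delta)-indistinguishability condition;
% the notation here is assumed, not taken from the paper itself.
\[
\Pr\!\left[\mathcal{U}(\mathcal{A}(D), D_k) \in \mathcal{S}\right]
\le e^{\epsilon}\,\Pr\!\left[\widetilde{\mathcal{A}}(D \setminus D_k) \in \mathcal{S}\right] + \delta,
\]
\[
\Pr\!\left[\widetilde{\mathcal{A}}(D \setminus D_k) \in \mathcal{S}\right]
\le e^{\epsilon}\,\Pr\!\left[\mathcal{U}(\mathcal{A}(D), D_k) \in \mathcal{S}\right] + \delta,
\]
for every measurable set of models $\mathcal{S}$. Read this way, the guarantee says that no statistical test can reliably distinguish the model produced by PDUDT from one obtained by retraining from scratch with matched noise.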

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-qiao25c,
  title = {{PDUDT}: Provable Decentralized Unlearning under Dynamic Topologies},
  author = {Qiao, Jing and Liu, Yu and Chen, Zengzhe and Li, Mingyi and Yuan, Yuan and Zhang, Xiao and Yu, Dongxiao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {50142--50170},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/qiao25c/qiao25c.pdf},
  url = {https://proceedings.mlr.press/v267/qiao25c.html},
  abstract = {This paper investigates decentralized unlearning, aiming to eliminate the impact of a specific client on the whole decentralized system. However, decentralized communication characterizations pose new challenges for effective unlearning: the indirect connections make it difficult to trace the specific client’s impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose the first Provable Decentralized Unlearning algorithm under Dynamic Topologies called PDUDT. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing it is statistically indistinguishable from perturbed retraining. Additionally, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total communication rounds. This rate matches state-of-the-art results. Experimental results show that compared with the Retrain method, PDUDT saves more than 99% of unlearning time while achieving comparable unlearning performance.}
}
Endnote
%0 Conference Paper
%T PDUDT: Provable Decentralized Unlearning under Dynamic Topologies
%A Jing Qiao
%A Yu Liu
%A Zengzhe Chen
%A Mingyi Li
%A Yuan Yuan
%A Xiao Zhang
%A Dongxiao Yu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-qiao25c
%I PMLR
%P 50142--50170
%U https://proceedings.mlr.press/v267/qiao25c.html
%V 267
%X This paper investigates decentralized unlearning, aiming to eliminate the impact of a specific client on the whole decentralized system. However, decentralized communication characterizations pose new challenges for effective unlearning: the indirect connections make it difficult to trace the specific client’s impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose the first Provable Decentralized Unlearning algorithm under Dynamic Topologies called PDUDT. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing it is statistically indistinguishable from perturbed retraining. Additionally, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total communication rounds. This rate matches state-of-the-art results. Experimental results show that compared with the Retrain method, PDUDT saves more than 99% of unlearning time while achieving comparable unlearning performance.
APA
Qiao, J., Liu, Y., Chen, Z., Li, M., Yuan, Y., Zhang, X. & Yu, D. (2025). PDUDT: Provable Decentralized Unlearning under Dynamic Topologies. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:50142-50170. Available from https://proceedings.mlr.press/v267/qiao25c.html.
