PDUDT: Provable Decentralized Unlearning under Dynamic Topologies
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:50142-50170, 2025.
Abstract
This paper investigates decentralized unlearning, which aims to eliminate the impact of a specific client on the whole decentralized system. The characteristics of decentralized communication pose new challenges for effective unlearning: indirect connections make it difficult to trace the specific client's impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose the first Provable Decentralized Unlearning algorithm under Dynamic Topologies, called PDUDT. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing that its output is statistically indistinguishable from that of perturbed retraining. Additionally, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total number of communication rounds; this rate matches state-of-the-art results. Experimental results show that, compared with the Retrain method, PDUDT saves more than 99% of unlearning time while achieving comparable unlearning performance.
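To make the setting concrete, below is a minimal toy simulation of retraining-free unlearning in a gossip system with a time-varying topology. It is a sketch of the general idea only, not PDUDT itself: the per-peer contribution tracking, the quadratic local losses, the pairwise mixing matrices, and the noise scale `sigma` are all illustrative assumptions, whereas the paper's actual correction term and noise calibration come from its theory.

```python
# Toy sketch: decentralized SGD with gossip mixing over a dynamic topology,
# followed by a retraining-free unlearning step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, dim, T, lr, sigma = 8, 5, 100, 0.1, 0.01   # sigma: assumed noise scale
optima = rng.standard_normal((n, dim))        # each client's local optimum

def grads(x):
    # Quadratic local losses ||x_i - optima_i||^2 / 2, for simplicity.
    return x - optima

def random_mixing(n, rng):
    """Doubly stochastic mixing matrix from a random pairing of clients;
    redrawn every round to model a dynamic communication topology."""
    W = np.eye(n)
    perm = rng.permutation(n)
    for a, b in zip(perm[0::2], perm[1::2]):
        W[a, a] = W[b, b] = W[a, b] = W[b, a] = 0.5
    return W

models = rng.standard_normal((n, dim))
# contrib[i, j] = the part of client i's parameters attributable to
# client j's initialization and local updates (exact decomposition here,
# since each round is linear in the updates).
contrib = np.zeros((n, n, dim))
contrib[np.arange(n), np.arange(n)] = models

for t in range(T):
    W = random_mixing(n, rng)
    u = -lr * grads(models)                   # local SGD updates
    # Mix both the models and the attribution bookkeeping.
    contrib = np.einsum('ik,kjd->ijd', W, contrib) + W[:, :, None] * u[None]
    models = W @ (models + u)

# Sanity check: the per-peer contributions sum back to the models.
assert np.allclose(models, contrib.sum(axis=1))

# "Unlearn" client 0 without retraining: each remaining client subtracts
# its tracked contribution from client 0 and adds Gaussian noise, in the
# spirit of being statistically indistinguishable from perturbed
# retraining. Calibrating sigma to get a provable guarantee is exactly
# what PDUDT's analysis supplies; this constant is a placeholder.
k = 0
unlearned = models - contrib[:, k] + sigma * rng.standard_normal((n, dim))
unlearned = np.delete(unlearned, k, axis=0)   # client k leaves the system
```

Note the trade-off this sketch makes visible: the subtraction removes only the parameter mass directly traceable to the departing client, while its indirect influence (other clients' gradients were computed on models it had already shaped) is what the perturbation and the paper's indistinguishability analysis are meant to cover.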