Empirical Study of Federated Unlearning: Efficiency and Effectiveness
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:959-974, 2024.
Abstract
The right to be forgotten (RTBF) is a concept that pertains to an individual’s right to request the removal or deletion of their personal information when it is no longer necessary, relevant, or accurate for the purposes for which it was initially collected. Machine Learning (ML) models often rely on large, diverse datasets for optimal performance. Hence, when an individual exercises the RTBF, it can impact the ML model’s performance and accuracy. In the context of Federated Learning (FL), where a server trains a model across multiple decentralized devices without moving raw data away from the clients, implementing the RTBF presents unique challenges compared to traditional ML approaches. For instance, the decentralized nature of FL makes it challenging to identify and remove specific user data from the model. Although various unlearning methods have been proposed in the literature, they have not been well investigated from the efficiency perspective. To fill this gap, this paper presents an empirical study that investigates the impacts of various unlearning methods. Our experiments are designed in diverse scenarios involving multiple communication and unlearning rounds on three datasets: MNIST, CIFAR-10, and CIFAR-100. We utilize backdoor attacks and cosine similarity to assess the effectiveness of each unlearning method. The findings and insights from this research can be integrated into FL systems to enhance their overall performance and effectiveness. Our research code is available on GitHub at https://github.com/sail-research/fed-unlearn.
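The abstract does not spell out how cosine similarity is computed; a common convention in the unlearning literature, sketched below, is to compare the flattened parameter vectors of the unlearned global model against a model retrained from scratch without the forgotten client's data. This is a minimal illustrative sketch under that assumption; the function names and toy model are hypothetical and do not come from the paper's repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def flatten_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all model parameters into a single 1-D vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])


def unlearning_cosine_similarity(unlearned: nn.Module,
                                 retrained: nn.Module) -> float:
    """Cosine similarity between the parameter vectors of two models.

    Values near 1.0 suggest the unlearned model is close, in parameter
    space, to a model retrained from scratch without the target
    client's data (assumed proxy for effective unlearning).
    """
    u, r = flatten_params(unlearned), flatten_params(retrained)
    return F.cosine_similarity(u, r, dim=0).item()


if __name__ == "__main__":
    # Toy usage: two small models stand in for the unlearned and
    # retrained-from-scratch global models (hypothetical architecture).
    torch.manual_seed(0)
    def make_model() -> nn.Module:
        return nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
    model_a, model_b = make_model(), make_model()
    print(f"cosine similarity: {unlearning_cosine_similarity(model_a, model_b):.4f}")
```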