Empirical Study of Federated Unlearning: Efficiency and Effectiveness

Thai-Hung Nguyen, Hong-Phuc Vu, Dung Thuy Nguyen, Tuan Minh Nguyen, Khoa D Doan, Kok-Seng Wong
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:959-974, 2024.

Abstract

The right to be forgotten (RTBF) refers to an individual’s right to request the removal or deletion of their personal information when it is no longer necessary, relevant, or accurate for the purposes for which it was initially collected. Machine Learning (ML) models often rely on large, diverse datasets for optimal performance; hence, when an individual exercises the RTBF, removing their data can impact the ML model’s performance and accuracy. In the context of Federated Learning (FL), where a server trains a model across multiple decentralized devices without moving data away from clients, implementing the RTBF presents unique challenges compared to traditional ML approaches. For instance, the decentralized nature makes it challenging to identify and remove specific user data from the model. Although various unlearning methods have been proposed in the literature, they have not been well investigated from the efficiency perspective. To fill this gap, this paper presents an empirical study of the impacts of various unlearning methods. Our experiments cover diverse scenarios involving multiple communication and unlearning rounds on three datasets: MNIST, CIFAR-10, and CIFAR-100. We utilize backdoor attacks and cosine similarity to assess the effectiveness of each unlearning method. The findings and insights from this research can be integrated into FL systems to enhance their overall performance and effectiveness. Our research code is available on GitHub at https://github.com/sail-research/fed-unlearn.
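For context, a minimal sketch of the two effectiveness measures mentioned in the abstract, written in PyTorch: cosine similarity between the flattened parameters of two models (for example, the unlearned model versus a reference model retrained without the target client), and backdoor accuracy on trigger-stamped inputs. The reference-model choice, the triggered data loader, and the target label below are illustrative assumptions, not the paper's exact evaluation protocol.

# Minimal sketch (not the authors' code): parameter-space cosine similarity
# and backdoor accuracy, two common probes of unlearning effectiveness.
import torch


def cosine_similarity(model_a, model_b):
    """Cosine similarity between the flattened parameters of two models."""
    vec_a = torch.cat([p.detach().flatten() for p in model_a.parameters()])
    vec_b = torch.cat([p.detach().flatten() for p in model_b.parameters()])
    return torch.nn.functional.cosine_similarity(vec_a, vec_b, dim=0).item()


def backdoor_accuracy(model, triggered_loader, target_label):
    """Fraction of trigger-stamped inputs classified as the attacker's target label.

    A low value after unlearning suggests the backdoored client's influence
    has been removed from the global model (assumed setup).
    """
    model.eval()
    hits, total = 0, 0
    with torch.no_grad():
        for x, _ in triggered_loader:
            preds = model(x).argmax(dim=1)
            hits += (preds == target_label).sum().item()
            total += x.size(0)
    return hits / max(total, 1)

In this sketch, a successful unlearning method would drive backdoor accuracy toward chance while keeping the unlearned model close (high cosine similarity) to a model retrained from scratch without the forgotten client.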

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-nguyen24a,
  title     = {Empirical Study of Federated Unlearning: {E}fficiency and Effectiveness},
  author    = {Nguyen, Thai-Hung and Vu, Hong-Phuc and Nguyen, Dung Thuy and Nguyen, Tuan Minh and Doan, Khoa D and Wong, Kok-Seng},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {959--974},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/nguyen24a/nguyen24a.pdf},
  url       = {https://proceedings.mlr.press/v222/nguyen24a.html}
}
Endnote
%0 Conference Paper
%T Empirical Study of Federated Unlearning: Efficiency and Effectiveness
%A Thai-Hung Nguyen
%A Hong-Phuc Vu
%A Dung Thuy Nguyen
%A Tuan Minh Nguyen
%A Khoa D Doan
%A Kok-Seng Wong
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-nguyen24a
%I PMLR
%P 959--974
%U https://proceedings.mlr.press/v222/nguyen24a.html
%V 222
APA
Nguyen, T., Vu, H., Nguyen, D.T., Nguyen, T.M., Doan, K.D. & Wong, K. (2024). Empirical Study of Federated Unlearning: Efficiency and Effectiveness. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:959-974. Available from https://proceedings.mlr.press/v222/nguyen24a.html.
