Machine Unlearning via Algorithmic Stability

Enayat Ullah, Tung Mai, Anup Rao, Ryan A. Rossi, Raman Arora
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:4126-4142, 2021.

Abstract

We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which, we argue, is suitable for the goal of exact unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a near-maximal coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
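To make the core learning primitive concrete, below is a minimal, illustrative sketch of noisy (stochastic) gradient descent on a one-dimensional convex loss: Gaussian noise is injected into each gradient step, so the iterates form a Markov chain with Gaussian transition kernels. This is an assumption-laden toy (the function `noisy_sgd`, the quadratic loss, and all parameter choices are ours for illustration), not the paper's actual TV-stable algorithm or its unlearning coupling.

```python
import random

def noisy_sgd(grad, x0, steps=500, lr=0.05, sigma=0.1, seed=0):
    """Gradient descent with Gaussian noise injected at every step.

    Each update is x_{t+1} = x_t - lr * (grad(x_t) + N(0, sigma^2)),
    so the iterates form a Markov chain whose transition kernel is a
    Gaussian centred at the deterministic gradient update.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = x - lr * (grad(x) + rng.gauss(0.0, sigma))
    return x

# Toy convex objective: f(x) = (x - 3)^2, with gradient 2 * (x - 3).
x_hat = noisy_sgd(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

With a strongly convex loss and small noise, the chain concentrates near the minimizer (here, x = 3); the noise is what makes the output distribution stable under replacing or deleting a data point, which is the property the unlearning algorithms exploit.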

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-ullah21a,
  title     = {Machine Unlearning via Algorithmic Stability},
  author    = {Ullah, Enayat and Mai, Tung and Rao, Anup and Rossi, Ryan A. and Arora, Raman},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {4126--4142},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/ullah21a/ullah21a.pdf},
  url       = {https://proceedings.mlr.press/v134/ullah21a.html},
  abstract  = {We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which, we argue, is suitable for the goal of exact unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a near-maximal coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.}
}
Endnote
%0 Conference Paper
%T Machine Unlearning via Algorithmic Stability
%A Enayat Ullah
%A Tung Mai
%A Anup Rao
%A Ryan A. Rossi
%A Raman Arora
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-ullah21a
%I PMLR
%P 4126--4142
%U https://proceedings.mlr.press/v134/ullah21a.html
%V 134
%X We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which, we argue, is suitable for the goal of exact unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a near-maximal coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
APA
Ullah, E., Mai, T., Rao, A., Rossi, R. A. & Arora, R. (2021). Machine Unlearning via Algorithmic Stability. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:4126-4142. Available from https://proceedings.mlr.press/v134/ullah21a.html.