Leveraging Per-Instance Privacy for Machine Unlearning

Nazanin Mohammadi Sepahvand, Anvith Thudi, Berivan Isik, Ashmita Bhattacharyya, Nicolas Papernot, Eleni Triantafillou, Daniel M. Roy, Gintare Karolina Dziugaite
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:53906-53922, 2025.

Abstract

We present a principled, per-instance approach to quantifying the difficulty of unlearning via fine-tuning. We begin by sharpening an analysis of noisy gradient descent for unlearning (Chien et al., 2024), obtaining a better utility–unlearning trade-off by replacing worst-case privacy loss bounds with per-instance privacy losses (Thudi et al., 2024), each of which bounds the (Rényi) divergence to retraining without an individual data point. To demonstrate the practical applicability of our theory, we present empirical results showing that our theoretical predictions are borne out both for Stochastic Gradient Langevin Dynamics (SGLD) and for standard fine-tuning without explicit noise. We further demonstrate that per-instance privacy losses correlate well with several existing data difficulty metrics, while also identifying harder groups of data points, and we introduce novel evaluation methods based on loss barriers. Altogether, our findings provide a foundation for more efficient and adaptive unlearning strategies tailored to the unique properties of individual data points.
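Two of the abstract's ingredients can be made concrete. The per-instance privacy losses bound a Rényi divergence between the distribution of the fine-tuned model and that of a model retrained without the given data point; the order-α Rényi divergence between distributions P and Q is the standard quantity

```latex
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \,\log\, \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right], \qquad \alpha > 1.
```

The unlearning procedure analyzed is fine-tuning with noisy gradient descent (SGLD) on the remaining data. Below is a minimal sketch of one SGLD step, not the authors' code: `grad_loss`, `eta`, and `sigma` are hypothetical placeholders, and the noise scale follows the usual SGLD choice sigma = sqrt(2 * eta / beta).

```python
import numpy as np

def sgld_step(theta, grad_loss, retain_batch, eta, sigma, rng):
    """One SGLD update: a gradient step on the retain set plus Gaussian noise.

    theta        : current parameter vector
    grad_loss    : callable (theta, batch) -> gradient; hypothetical placeholder
    retain_batch : mini-batch drawn from the retain set (forget points excluded)
    eta          : step size
    sigma        : noise standard deviation, e.g. sqrt(2 * eta / beta)
    rng          : np.random.Generator
    """
    g = grad_loss(theta, retain_batch)
    noise = rng.normal(0.0, sigma, size=theta.shape)
    return theta - eta * g + noise

# Toy usage with a quadratic loss standing in for a model's loss gradient.
rng = np.random.default_rng(0)
theta = np.zeros(10)
grad_quadratic = lambda th, batch: th - batch.mean(axis=0)
theta = sgld_step(theta, grad_quadratic, rng.normal(size=(32, 10)),
                  eta=0.01, sigma=np.sqrt(2 * 0.01), rng=rng)
```

Iterating such steps on the retain set is the fine-tuning process whose distance to retraining-from-scratch the paper's per-instance analysis controls.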

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sepahvand25a,
  title     = {Leveraging Per-Instance Privacy for Machine Unlearning},
  author    = {Sepahvand, Nazanin Mohammadi and Thudi, Anvith and Isik, Berivan and Bhattacharyya, Ashmita and Papernot, Nicolas and Triantafillou, Eleni and Roy, Daniel M. and Dziugaite, Gintare Karolina},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {53906--53922},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sepahvand25a/sepahvand25a.pdf},
  url       = {https://proceedings.mlr.press/v267/sepahvand25a.html},
  abstract  = {We present a principled, per-instance approach to quantifying the difficulty of unlearning via fine-tuning. We begin by sharpening an analysis of noisy gradient descent for unlearning (Chien et al., 2024), obtaining a better utility–unlearning trade-off by replacing worst-case privacy loss bounds with per-instance privacy losses (Thudi et al., 2024), each of which bounds the (Rényi) divergence to retraining without an individual data point. To demonstrate the practical applicability of our theory, we present empirical results showing that our theoretical predictions are borne out both for Stochastic Gradient Langevin Dynamics (SGLD) and for standard fine-tuning without explicit noise. We further demonstrate that per-instance privacy losses correlate well with several existing data difficulty metrics, while also identifying harder groups of data points, and we introduce novel evaluation methods based on loss barriers. Altogether, our findings provide a foundation for more efficient and adaptive unlearning strategies tailored to the unique properties of individual data points.}
}
Endnote
%0 Conference Paper
%T Leveraging Per-Instance Privacy for Machine Unlearning
%A Nazanin Mohammadi Sepahvand
%A Anvith Thudi
%A Berivan Isik
%A Ashmita Bhattacharyya
%A Nicolas Papernot
%A Eleni Triantafillou
%A Daniel M. Roy
%A Gintare Karolina Dziugaite
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sepahvand25a
%I PMLR
%P 53906--53922
%U https://proceedings.mlr.press/v267/sepahvand25a.html
%V 267
%X We present a principled, per-instance approach to quantifying the difficulty of unlearning via fine-tuning. We begin by sharpening an analysis of noisy gradient descent for unlearning (Chien et al., 2024), obtaining a better utility–unlearning trade-off by replacing worst-case privacy loss bounds with per-instance privacy losses (Thudi et al., 2024), each of which bounds the (Rényi) divergence to retraining without an individual data point. To demonstrate the practical applicability of our theory, we present empirical results showing that our theoretical predictions are borne out both for Stochastic Gradient Langevin Dynamics (SGLD) and for standard fine-tuning without explicit noise. We further demonstrate that per-instance privacy losses correlate well with several existing data difficulty metrics, while also identifying harder groups of data points, and we introduce novel evaluation methods based on loss barriers. Altogether, our findings provide a foundation for more efficient and adaptive unlearning strategies tailored to the unique properties of individual data points.
APA
Sepahvand, N.M., Thudi, A., Isik, B., Bhattacharyya, A., Papernot, N., Triantafillou, E., Roy, D.M. & Dziugaite, G.K. (2025). Leveraging Per-Instance Privacy for Machine Unlearning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:53906-53922. Available from https://proceedings.mlr.press/v267/sepahvand25a.html.