Ticketed Learning–Unlearning Schemes

Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5110-5139, 2023.

Abstract

We consider the learning–unlearning paradigm defined as follows. First, given a dataset, the goal is to learn a good predictor, such as one minimizing a certain loss. Subsequently, given any subset of examples that wish to be unlearnt, the goal is to learn, without the knowledge of the original training dataset, a good predictor that is identical to the predictor that would have been produced when learning from scratch on the surviving examples. We propose a new ticketed model for learning–unlearning wherein the learning algorithm can send back additional information in the form of a small-sized (encrypted) “ticket” to each participating training example, in addition to retaining a small amount of “central” information for later. Subsequently, the examples that wish to be unlearnt present their tickets to the unlearning algorithm, which additionally uses the central information to return a new predictor. We provide space-efficient ticketed learning–unlearning schemes for a broad family of concept classes, including thresholds, parities, and intersection-closed classes, among others. En route, we introduce the count-to-zero problem, where during unlearning, the goal is to simply know if there are any examples that survived. We give a ticketed learning–unlearning scheme for this problem that relies on the construction of Sperner families with certain properties, which might be of independent interest.
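To fix the interface of the ticketed model, a minimal sketch in Python follows, using the count-to-zero problem from the abstract as a toy instantiation and assuming a single round of unlearning. All names here (LearnOutput, learn, unlearn) are illustrative, not from the paper, and this naive scheme sidesteps the space-efficiency and Sperner-family machinery that is the paper's actual contribution.

    # Illustrative sketch of the ticketed learning-unlearning interface.
    # Hypothetical names; not the paper's construction. Assumes a single
    # round of unlearning for simplicity.
    from dataclasses import dataclass

    @dataclass
    class LearnOutput:
        predictor: bool  # here: "does at least one example survive?"
        tickets: list    # one small ticket per training example
        central: int     # small "central" state retained by the learner

    def learn(examples) -> LearnOutput:
        """Learning: hand each example a ticket, keep a count centrally.

        For the count-to-zero toy problem, the predictor is just the bit
        "the surviving set is nonempty", the central information is the
        number of examples, and every ticket can be a constant.
        """
        return LearnOutput(
            predictor=len(examples) > 0,
            tickets=[() for _ in examples],  # O(1)-size ticket per example
            central=len(examples),
        )

    def unlearn(central: int, presented_tickets: list) -> bool:
        """Unlearning: sees only the deleted examples' tickets plus the
        central information, never the original dataset, and must output
        exactly what learning from scratch on the survivors would give."""
        survivors = central - len(presented_tickets)
        return survivors > 0  # count-to-zero answer for the surviving set

    # Usage: learn on three examples, then unlearn some of them.
    out = learn(["x1", "x2", "x3"])
    assert out.predictor is True
    assert unlearn(out.central, out.tickets) is False     # no survivors
    assert unlearn(out.central, out.tickets[:2]) is True  # one survivor

The interesting regime, which this sketch does not capture, is supporting multiple rounds of unlearning and richer concept classes while keeping both the per-example tickets and the central information small.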

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-ghazi23a,
  title     = {Ticketed Learning–Unlearning Schemes},
  author    = {Ghazi, Badih and Kamath, Pritish and Kumar, Ravi and Manurangsi, Pasin and Sekhari, Ayush and Zhang, Chiyuan},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5110--5139},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/ghazi23a/ghazi23a.pdf},
  url       = {https://proceedings.mlr.press/v195/ghazi23a.html},
  abstract  = {We consider the learning–unlearning paradigm defined as follows. First, given a dataset, the goal is to learn a good predictor, such as one minimizing a certain loss. Subsequently, given any subset of examples that wish to be unlearnt, the goal is to learn, without the knowledge of the original training dataset, a good predictor that is identical to the predictor that would have been produced when learning from scratch on the surviving examples. We propose a new ticketed model for learning–unlearning wherein the learning algorithm can send back additional information in the form of a small-sized (encrypted) “ticket” to each participating training example, in addition to retaining a small amount of “central” information for later. Subsequently, the examples that wish to be unlearnt present their tickets to the unlearning algorithm, which additionally uses the central information to return a new predictor. We provide space-efficient ticketed learning–unlearning schemes for a broad family of concept classes, including thresholds, parities, and intersection-closed classes, among others. En route, we introduce the count-to-zero problem, where during unlearning, the goal is to simply know if there are any examples that survived. We give a ticketed learning–unlearning scheme for this problem that relies on the construction of Sperner families with certain properties, which might be of independent interest.}
}
Endnote
%0 Conference Paper
%T Ticketed Learning–Unlearning Schemes
%A Badih Ghazi
%A Pritish Kamath
%A Ravi Kumar
%A Pasin Manurangsi
%A Ayush Sekhari
%A Chiyuan Zhang
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-ghazi23a
%I PMLR
%P 5110--5139
%U https://proceedings.mlr.press/v195/ghazi23a.html
%V 195
%X We consider the learning–unlearning paradigm defined as follows. First, given a dataset, the goal is to learn a good predictor, such as one minimizing a certain loss. Subsequently, given any subset of examples that wish to be unlearnt, the goal is to learn, without the knowledge of the original training dataset, a good predictor that is identical to the predictor that would have been produced when learning from scratch on the surviving examples. We propose a new ticketed model for learning–unlearning wherein the learning algorithm can send back additional information in the form of a small-sized (encrypted) “ticket” to each participating training example, in addition to retaining a small amount of “central” information for later. Subsequently, the examples that wish to be unlearnt present their tickets to the unlearning algorithm, which additionally uses the central information to return a new predictor. We provide space-efficient ticketed learning–unlearning schemes for a broad family of concept classes, including thresholds, parities, and intersection-closed classes, among others. En route, we introduce the count-to-zero problem, where during unlearning, the goal is to simply know if there are any examples that survived. We give a ticketed learning–unlearning scheme for this problem that relies on the construction of Sperner families with certain properties, which might be of independent interest.
APA
Ghazi, B., Kamath, P., Kumar, R., Manurangsi, P., Sekhari, A. & Zhang, C. (2023). Ticketed Learning–Unlearning Schemes. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5110-5139. Available from https://proceedings.mlr.press/v195/ghazi23a.html.
