Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees

Raef Bassily, Mehryar Mohri, Ananda Theertha Suresh
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5638-5643, 2022.

Abstract

The design of efficient differentially private (DP) learning algorithms with dimension-independent learning guarantees has been one of the central challenges in the field of privacy-preserving machine learning. Existing algorithms either suffer from weak generalization guarantees, rely on restrictive model assumptions, or incur a quite large computational cost. In non-private learning, dimension-independent generalization guarantees based on the notion of confidence margin have been shown to be the most informative and useful learning guarantees. This motivates a systematic study of DP learning algorithms with confidence-margin generalization guarantees. Recent work has started exploring this direction in the context of linear and kernel-based classification as well as certain classes of neural networks (NNs). Despite showing several positive results, a number of fundamental questions are still open. We identify two major open problems related to DP margin-based learning algorithms. The first problem relates to the design of algorithms with more favorable computational cost. The second one pertains to the question of achieving margin guarantees for NNs under DP with no explicit dependence on the network size.
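
For illustration only, and not as a statement of this paper's results: a standard non-private confidence-margin guarantee for linear classifiers h(x) = w \cdot x with \|w\| \le \Lambda and \|x\| \le R states that, for a fixed margin \rho > 0, with probability at least 1 - \delta over an i.i.d. sample of size m, every such classifier h satisfies

    R(h) \le \widehat{R}_\rho(h) + 2\sqrt{\frac{R^2 \Lambda^2 / \rho^2}{m}} + \sqrt{\frac{\log(1/\delta)}{2m}},

where R(h) is the misclassification error and \widehat{R}_\rho(h) is the empirical \rho-margin loss. The bound depends on the norms and the margin rather than on the dimension, which is the kind of dimension-independent guarantee the open problems ask to match under differential privacy at a reasonable computational cost.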

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-open-problem-bassily22a,
  title     = {Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees},
  author    = {Bassily, Raef and Mohri, Mehryar and Suresh, Ananda Theertha},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {5638--5643},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/open-problem-bassily22a/open-problem-bassily22a.pdf},
  url       = {https://proceedings.mlr.press/v178/open-problem-bassily22a.html},
  abstract  = {The design of efficient differentially private (DP) learning algorithms with dimension-independent learning guarantees has been one of the central challenges in the field of privacy-preserving machine learning. Existing algorithms either suffer from weak generalization guarantees, restrictive model assumptions, or quite large computation cost. In non-private learning, dimension-independent generalization guarantees based on the notion of confidence margin were shown to be the most informative and useful learning guarantees. This motivates a systematic study of DP learning algorithms with confidence-margin generalization guarantees. A recent work has started exploring this direction in the context of linear and kernel-based classification as well as certain classes of neural networks (NNs). Despite showing several positive results, a number of fundamental questions are still open. We identify two major open problems related to DP margin-based learning algorithms. The first problem relates to the design of algorithms with more favorable computational cost. The second one pertains to the question of achieving margin guarantees for NNs under DP with no explicit dependence on the network size.}
}
Endnote
%0 Conference Paper
%T Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees
%A Raef Bassily
%A Mehryar Mohri
%A Ananda Theertha Suresh
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-open-problem-bassily22a
%I PMLR
%P 5638--5643
%U https://proceedings.mlr.press/v178/open-problem-bassily22a.html
%V 178
%X The design of efficient differentially private (DP) learning algorithms with dimension-independent learning guarantees has been one of the central challenges in the field of privacy-preserving machine learning. Existing algorithms either suffer from weak generalization guarantees, restrictive model assumptions, or quite large computation cost. In non-private learning, dimension-independent generalization guarantees based on the notion of confidence margin were shown to be the most informative and useful learning guarantees. This motivates a systematic study of DP learning algorithms with confidence-margin generalization guarantees. A recent work has started exploring this direction in the context of linear and kernel-based classification as well as certain classes of neural networks (NNs). Despite showing several positive results, a number of fundamental questions are still open. We identify two major open problems related to DP margin-based learning algorithms. The first problem relates to the design of algorithms with more favorable computational cost. The second one pertains to the question of achieving margin guarantees for NNs under DP with no explicit dependence on the network size.
APA
Bassily, R., Mohri, M., & Suresh, A. T. (2022). Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research, 178:5638-5643. Available from https://proceedings.mlr.press/v178/open-problem-bassily22a.html.
