Perturb-and-Project: Differentially Private Similarities and Marginals
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:9161-9179, 2024.
Abstract
We revisit the input perturbations framework of differential privacy, where noise is added to the input A∈S and the result is then projected back to the space of admissible datasets S. Through this framework, we first design novel efficient algorithms to privately release pair-wise cosine similarities. Second, we derive a novel algorithm to compute k-way marginal queries over n features. Prior work could achieve comparable guarantees only for even k. Furthermore, we extend our results to t-sparse datasets, where our efficient algorithms yield novel, stronger guarantees whenever t ≤ n^{5/6}/log n. Finally, we provide a theoretical perspective on why fast input perturbation algorithms work well in practice. The key technical ingredients behind our results are tight sum-of-squares certificates upper bounding the Gaussian complexity of sets of solutions.
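The perturb-and-project template described above can be illustrated with a minimal sketch. This is not the paper's algorithm or its noise calibration; the choice of admissible set S (matrices with unit-norm rows, a natural domain when releasing pair-wise cosine similarities), the noise scale `sigma`, and the helper names are all illustrative assumptions.

```python
import numpy as np

def perturb_and_project(A, sigma, project, seed=None):
    """Generic perturb-and-project: add Gaussian noise to the input,
    then map the noisy result back onto the admissible set S.
    (Illustrative sketch; sigma must be calibrated to the desired
    privacy parameters, which is not done here.)"""
    rng = np.random.default_rng(seed)
    noisy = A + rng.normal(scale=sigma, size=A.shape)
    return project(noisy)

def project_unit_rows(M):
    """Projection onto S = matrices whose rows have unit Euclidean norm
    (an assumed example of an admissible set)."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M / np.maximum(norms, 1e-12)

# Toy dataset of 5 points in R^3, normalized so A lies in S.
A = project_unit_rows(np.random.default_rng(0).normal(size=(5, 3)))
A_priv = perturb_and_project(A, sigma=0.5, project=project_unit_rows, seed=1)
sims = A_priv @ A_priv.T  # released pair-wise cosine similarities
```

Because the projection returns the output to S, the released similarities automatically satisfy the structural constraints of genuine cosine similarities (diagonal equal to 1, entries in [-1, 1]), which plain output noising would violate.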