Certified private data release for sparse Lipschitz functions
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1396-1404, 2024.
Abstract
As machine learning has become more prevalent in everyday applications, a natural requirement is protecting the privacy of the training data. When the relevant learning questions are unknown in advance, or hyper-parameter tuning plays a central role, one solution is to release a differentially private synthetic data set that leads to similar conclusions to the original training data. In this work, we introduce an algorithm that achieves fast rates for the utility loss of sparse Lipschitz queries. Furthermore, we show how to obtain a certificate for the utility loss for a large class of algorithms.
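The paper's algorithm is not reproduced here; as a minimal sketch of the setting, the following code releases a differentially private synthetic data set via a Laplace-noised histogram (a standard baseline, not the authors' method) and empirically measures the utility loss over a few 1-Lipschitz queries. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def private_histogram_release(data, bins=20, epsilon=1.0, rng=None):
    """Release a synthetic data set over [0, 1] via an epsilon-DP histogram.

    Illustrative baseline only: a disjoint-bin histogram has L1 sensitivity 1
    (adding/removing one record changes one count by 1), so Laplace noise of
    scale 1/epsilon on each count yields epsilon-differential privacy.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    counts, edges = np.histogram(data, bins=bins, range=(0.0, 1.0))
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=bins)
    noisy = np.clip(noisy, 0.0, None)
    # Normalize to a sampling distribution (fall back to uniform if degenerate).
    probs = noisy / noisy.sum() if noisy.sum() > 0 else np.full(bins, 1.0 / bins)
    # Sample synthetic points uniformly within the chosen bins.
    idx = rng.choice(bins, size=len(data), p=probs)
    synth = edges[idx] + rng.random(len(data)) * (edges[idx + 1] - edges[idx])
    return synth

def utility_loss(real, synth, queries):
    """Worst-case gap in mean query value between the real and synthetic data."""
    return max(abs(np.mean(q(real)) - np.mean(q(synth))) for q in queries)

# Example: compare empirical means of two 1-Lipschitz queries.
rng = np.random.default_rng(42)
data = np.clip(rng.beta(2, 5, size=2000), 0.0, 1.0)
synth = private_histogram_release(data, epsilon=1.0, rng=rng)
queries = [lambda x: np.abs(x - 0.5), lambda x: np.minimum(x, 0.7)]
loss = utility_loss(data, synth, queries)
```

Because each query is 1-Lipschitz and the data lie in [0, 1], the empirical utility loss is automatically bounded by 1; an actual certificate, as studied in the paper, would bound this gap with a guarantee rather than a single empirical measurement.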