Differentially Private Learning with Kernels
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):118-126, 2013.
Abstract
In this paper, we consider the problem of differentially private learning where access to the training features is through a kernel function only. Existing work on this problem is restricted to translation-invariant kernels, for which (approximate) training features are available explicitly. In fact, for a general class of kernel functions, and in the standard setting of releasing a differentially private predictor w^*, the problem is impossible to solve [CMS11]. In this work, we relax the problem into three easier but practical settings. In the first setting, we consider an interactive model where the user sends its test set to a trusted learner, who sends back differentially private predictions over the test points. This setting is prevalent in modern online systems such as search engines and ad engines. In the second model, the learner sends back a differentially private version of the optimal parameter vector w^*, but requires access to a small subset of the unlabeled test set beforehand. This is also a practical setting, involving two parties interacting through a trusted third party. Our third model is similar to the traditional model, where the learner is oblivious to the test set and needs to send a differentially private version of w^*, but the kernels are restricted to efficiently computable functions over low-dimensional vector spaces. For each of the models, we derive differentially private learning algorithms with provable "utility" or error bounds. Moreover, we show that our methods can also be applied to the traditional setting of [Rubinstein09, CMS11]. There, our sample complexity bounds have only an O(d^{1/3}) dependence on the dimensionality d, while existing methods require O(d^{1/2}) samples to achieve the same generalization error.
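To make the first (interactive) setting concrete, the sketch below shows one simple way a trusted learner could answer a user-supplied test set with noisy kernel predictions: fit a kernel ridge regressor and release each prediction with Laplace output perturbation. This is an illustrative assumption, not the paper's algorithm; in particular, the function names and the `sensitivity` argument are hypothetical placeholders, and a real privacy guarantee would require calibrating the noise to a proven sensitivity bound for the chosen kernel, loss, and regularizer.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of the RBF kernel between rows of X and rows of Z.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def private_predictions(X_train, y_train, X_test, eps, lam=1.0, sensitivity=1.0):
    """Return noisy kernel-ridge predictions at the user's test points.

    `sensitivity` is a placeholder for a bound on how much any single
    training example can change a released prediction; the actual bound
    depends on the kernel, loss, and regularizer and is not derived here.
    """
    n = len(y_train)
    K = rbf_kernel(X_train, X_train)
    # Kernel ridge regression: alpha = (K + lam * n * I)^{-1} y.
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)
    preds = rbf_kernel(X_test, X_train) @ alpha
    # Output perturbation: add Laplace noise to each released prediction.
    noise = np.random.laplace(scale=sensitivity / eps, size=preds.shape)
    return preds + noise
```

In this interactive protocol the raw parameter vector (or dual coefficients) never leaves the learner; only the perturbed predictions at the user's test points are released, which is what distinguishes this setting from releasing a differentially private w^* directly.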