Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:819-850, 2013.
Abstract
We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the goal is to determine the model that best “fits” the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined answer, in the following sense: Suppose that there is a nonprivate model selection procedure f, which is the reference to which we compare our performance. Our differentially private algorithms output the correct value f(D) whenever f is stable on the input data set D. We work with two notions, perturbation stability and sub-sampling stability. We give two classes of results: generic ones that apply to any function with a discrete output set, and specific algorithms for the problem of sparse linear regression. The algorithms we describe are efficient and in some cases match the optimal non-private asymptotic sample complexity. Our algorithms for sparse linear regression require analyzing the stability properties of the popular LASSO estimator. We give sufficient conditions for the LASSO estimator to be robust to small changes in the data set, and show that these conditions hold with high probability under essentially the same stochastic assumptions that are used in the literature to analyze convergence of the LASSO.
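As a rough illustration of the sub-sampling-stability idea described above, the sketch below runs a nonprivate selection procedure f on many random subsamples, and privately releases the most frequent output only if it wins by a noisy margin. This is a minimal sketch under assumed parameters: the function name, subsampling rate, noise scale, and threshold are illustrative placeholders, not the paper's exact algorithm or calibration.

```python
import numpy as np
from collections import Counter


def private_selection_by_subsampling(D, f, epsilon, m=50, q=0.1, rng=None):
    """Sketch of sub-sampling-stability-based private selection.

    D       : sequence of records
    f       : nonprivate model-selection procedure mapping a data set to a
              hashable model identifier (e.g., a tuple of selected features)
    epsilon : differential-privacy parameter
    m, q    : number of subsamples and subsampling rate (illustrative defaults;
              the paper calibrates these to epsilon, which is not done here)
    """
    rng = rng or np.random.default_rng()
    n = len(D)
    k = max(1, int(q * n))

    # Run the nonprivate procedure on m random subsamples and tally its outputs.
    votes = Counter()
    for _ in range(m):
        idx = rng.choice(n, size=k, replace=False)
        votes[f([D[i] for i in idx])] += 1

    # The gap between the most frequent output and the runner-up measures how
    # stable f is under subsampling on this particular data set.
    ranked = votes.most_common(2)
    top, top_count = ranked[0]
    runner_up_count = ranked[1][1] if len(ranked) > 1 else 0
    gap = top_count - runner_up_count

    # Noisy threshold test (placeholder noise scale and threshold): release the
    # winning model only if the noisy gap is large; otherwise decline to answer.
    noisy_gap = gap + rng.laplace(scale=1.0 / epsilon)
    threshold = 1.0 + np.log(1.0 / 0.05) / epsilon  # assumed threshold form
    return top if noisy_gap > threshold else None  # None = "no stable answer"
```

The design point the sketch is meant to convey is that privacy noise is added only to a one-dimensional stability score (the vote gap), not to the model itself, so when f is sub-sampling stable on D the exact nonprivate answer f(D) is returned with high probability.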