Principal component analysis in the stochastic differential privacy model
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1110-1119, 2021.
Abstract
In this paper, we study the differentially private Principal Component Analysis (PCA) problem in stochastic optimization settings. We first propose a new stochastic gradient perturbation PCA mechanism (DP-SPCA) that computes the right singular subspace while achieving (ϵ,δ)-differential privacy. To obtain a stronger utility guarantee and better performance, we then present a new differentially private stochastic variance-reduced PCA mechanism (DP-VRPCA) with gradient perturbation. To the best of our knowledge, this is the first work to apply stochastic gradient perturbation to (ϵ,δ)-differentially private PCA. We also compare the proposed algorithms with existing state-of-the-art methods, and experiments on real-world datasets and on classification tasks confirm the improved theoretical guarantees of our algorithms.
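To make the gradient-perturbation idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): an Oja-style stochastic PCA update in which each per-sample gradient is norm-clipped and perturbed with Gaussian noise before the step. The function name, parameters, and the noise scale `sigma` are illustrative assumptions; in a real (ϵ,δ)-DP mechanism, `sigma` would be calibrated to the privacy budget via a composition analysis.

```python
import numpy as np

def dp_spca_sketch(X, k, epochs=5, lr=0.1, clip=1.0, sigma=1.0, seed=0):
    """Illustrative sketch of gradient-perturbed stochastic PCA (Oja-style).

    Assumptions (not from the paper): per-sample gradients of the Rayleigh
    quotient are clipped to norm `clip`, Gaussian noise of scale
    `sigma * clip` is added, and the iterate is re-orthonormalized by QR.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random orthonormal initialization of the k-dimensional subspace.
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = X[i]
            g = np.outer(x, x @ W)                              # per-sample gradient x x^T W
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip gradient norm
            g += sigma * clip * rng.standard_normal(g.shape)    # Gaussian perturbation
            W, _ = np.linalg.qr(W + lr * g)                     # ascent step + re-orthonormalize
    return W
```

The QR step keeps the columns of `W` orthonormal, so the output is always a valid basis for a k-dimensional subspace regardless of the noise level; the noise only affects how well that subspace aligns with the true principal directions.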