DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

Bao Wang, Quanquan Gu, March Boedihardjo, Lingxiao Wang, Farzin Barekat, Stanley J. Osher
Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR 107:328-351, 2020.

Abstract

Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than their non-private counterparts. To mitigate this degradation, we propose DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but in practice it makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement, and its extra computational and memory overhead compared with DP-SGD is negligible. DP-LSSGD is applicable to training a large variety of ML models, including DNNs. The code is available at \url{https://github.com/BaoWangMath/DP-LSSGD}.
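To make the idea above concrete, the following is a minimal NumPy sketch of a single DP-LSSGD step, not the authors' implementation (see the linked repository for that). It assumes a flattened 1-D parameter vector, per-sample gradient clipping with the Gaussian mechanism as in standard DP-SGD, and Laplacian smoothing applied to the privatized gradient via the FFT, using the circulant discrete Laplacian common in the Laplacian smoothing SGD literature. The function names (laplacian_smooth, dp_lssgd_step) and hyperparameters are illustrative assumptions, not part of the paper.

import numpy as np

def laplacian_smooth(v, sigma):
    """Apply (I - sigma * L)^{-1} to a 1-D vector v via the FFT,
    where L is the 1-D circulant discrete Laplacian.
    sigma controls the smoothing strength (sigma = 0 recovers v)."""
    n = v.shape[0]
    d = np.zeros(n)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0       # first column of L
    # Eigenvalues of I - sigma * L; all are >= 1, so the division is safe.
    denom = 1.0 - sigma * np.fft.fft(d)
    return np.real(np.fft.ifft(np.fft.fft(v) / denom))

def dp_lssgd_step(w, per_sample_grads, lr, clip_norm, noise_multiplier, sigma):
    """One illustrative DP-LSSGD step on a flattened parameter vector w:
    clip per-sample gradients, average, privatize with Gaussian noise
    (the Gaussian mechanism), then apply Laplacian smoothing to the
    noisy gradient before the SGD update."""
    batch_size = len(per_sample_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm) for g in per_sample_grads]
    g_avg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch_size, size=w.shape)
    g_private = g_avg + noise                      # same privatized gradient as DP-SGD
    g_smoothed = laplacian_smooth(g_private, sigma)  # the extra DP-LSSGD smoothing step
    return w - lr * g_smoothed

Because the smoothing is applied after the noise is injected, the privacy accounting is unchanged relative to DP-SGD; the only added cost in this sketch is one forward and one inverse FFT per step.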

Cite this Paper


BibTeX
@InProceedings{pmlr-v107-wang20a,
  title     = {{DP-LSSGD: A} Stochastic Optimization Method to Lift the Utility in Privacy-Preserving {ERM}},
  author    = {Wang, Bao and Gu, Quanquan and Boedihardjo, March and Wang, Lingxiao and Barekat, Farzin and Osher, Stanley J.},
  booktitle = {Proceedings of The First Mathematical and Scientific Machine Learning Conference},
  pages     = {328--351},
  year      = {2020},
  editor    = {Lu, Jianfeng and Ward, Rachel},
  volume    = {107},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v107/wang20a/wang20a.pdf},
  url       = {https://proceedings.mlr.press/v107/wang20a.html},
  abstract  = {Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than the non-private ones. To mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but in practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible. DP-LSSGD is applicable to train a large variety of ML models, including DNNs. The code is available at \url{https://github.com/BaoWangMath/DP-LSSGD}.}
}
Endnote
%0 Conference Paper
%T DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM
%A Bao Wang
%A Quanquan Gu
%A March Boedihardjo
%A Lingxiao Wang
%A Farzin Barekat
%A Stanley J. Osher
%B Proceedings of The First Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Jianfeng Lu
%E Rachel Ward
%F pmlr-v107-wang20a
%I PMLR
%P 328--351
%U https://proceedings.mlr.press/v107/wang20a.html
%V 107
%X Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than the non-private ones. To mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but in practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible. DP-LSSGD is applicable to train a large variety of ML models, including DNNs. The code is available at \url{https://github.com/BaoWangMath/DP-LSSGD}.
APA
Wang, B., Gu, Q., Boedihardjo, M., Wang, L., Barekat, F. & Osher, S.J. (2020). DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM. Proceedings of The First Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 107:328-351. Available from https://proceedings.mlr.press/v107/wang20a.html.
