Scalable Differential Privacy with Certified Robustness in Adversarial Learning

Hai Phan, My T. Thai, Han Hu, Ruoming Jin, Tong Sun, Dejing Dou
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7683-7694, 2020.

Abstract

In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples. By leveraging the sequential composition theory in DP, we randomize both input and latent spaces to strengthen our certified robustness bounds. To address the trade-off among model utility, privacy loss, and robustness, we design an original adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. A new stochastic batch training is proposed to apply our mechanism on large DNNs and datasets, by bypassing the vanilla iterative batch-by-batch training in DP DNNs. An end-to-end theoretical analysis and evaluations show that our mechanism notably improves the robustness and scalability of DP DNNs.
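The abstract's core idea, injecting calibrated noise into both the input and a latent representation and accounting for the total privacy loss via sequential composition, can be sketched as follows. This is an illustrative toy, not the paper's actual StoBatch mechanism: the `gaussian_sigma` calibration uses the classic Gaussian-mechanism bound, and the unit sensitivities and budget split are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sigma(sensitivity, epsilon, delta):
    # Classic Gaussian-mechanism noise scale (an assumed calibration for
    # illustration; the paper derives its own tightened sensitivities).
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def randomize(x, sensitivity, epsilon, delta, rng):
    # Add calibrated Gaussian noise to a representation to make it DP.
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return x + rng.normal(0.0, sigma, size=x.shape)

# Split a total privacy budget between the two spaces. By sequential
# composition, applying both mechanisms consumes roughly
# (eps_in + eps_lat, 2 * delta)-DP overall.
eps_in, eps_lat, delta = 0.5, 0.5, 1e-5

x = rng.random((4, 8))                        # toy input batch
x_noisy = randomize(x, 1.0, eps_in, delta, rng)   # randomize input space
W = rng.random((8, 3))                        # toy hidden-layer weights
h = np.tanh(x_noisy @ W)                      # toy latent representation
h_noisy = randomize(h, 1.0, eps_lat, delta, rng)  # randomize latent space
```

Randomizing both spaces, rather than only the input, is what the abstract credits for strengthening the certified robustness bounds.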

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-phan20a,
  title     = {Scalable Differential Privacy with Certified Robustness in Adversarial Learning},
  author    = {Phan, Hai and Thai, My T. and Hu, Han and Jin, Ruoming and Sun, Tong and Dou, Dejing},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7683--7694},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/phan20a/phan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/phan20a.html},
  abstract  = {In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples. By leveraging the sequential composition theory in DP, we randomize both input and latent spaces to strengthen our certified robustness bounds. To address the trade-off among model utility, privacy loss, and robustness, we design an original adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. A new stochastic batch training is proposed to apply our mechanism on large DNNs and datasets, by bypassing the vanilla iterative batch-by-batch training in DP DNNs. An end-to-end theoretical analysis and evaluations show that our mechanism notably improves the robustness and scalability of DP DNNs.}
}
Endnote
%0 Conference Paper
%T Scalable Differential Privacy with Certified Robustness in Adversarial Learning
%A Hai Phan
%A My T. Thai
%A Han Hu
%A Ruoming Jin
%A Tong Sun
%A Dejing Dou
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-phan20a
%I PMLR
%P 7683--7694
%U https://proceedings.mlr.press/v119/phan20a.html
%V 119
%X In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples. By leveraging the sequential composition theory in DP, we randomize both input and latent spaces to strengthen our certified robustness bounds. To address the trade-off among model utility, privacy loss, and robustness, we design an original adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. A new stochastic batch training is proposed to apply our mechanism on large DNNs and datasets, by bypassing the vanilla iterative batch-by-batch training in DP DNNs. An end-to-end theoretical analysis and evaluations show that our mechanism notably improves the robustness and scalability of DP DNNs.
APA
Phan, H., Thai, M.T., Hu, H., Jin, R., Sun, T. & Dou, D. (2020). Scalable Differential Privacy with Certified Robustness in Adversarial Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7683-7694. Available from https://proceedings.mlr.press/v119/phan20a.html.