Adversarial Robustness via Runtime Masking and Cleansing

Yi-Hsuan Wu, Chia-Hung Yuan, Shan-Hung Wu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10399-10409, 2020.

Abstract

Deep neural networks have been shown to be vulnerable to adversarial attacks. This motivates robust learning techniques, such as adversarial training, whose goal is to learn a network that is robust against adversarial attacks. However, the sample complexity of robust learning can be significantly larger than that of “standard” learning. In this paper, we propose improving the adversarial robustness of a network by leveraging the potentially large amount of test data seen at runtime. We devise a new defense method, called runtime masking and cleansing (RMC), that adapts the network at runtime before making a prediction in order to dynamically mask network gradients and cleanse the model of the non-robust features inevitably learned during training due to the limited size of the training set. Experiments on real-world datasets demonstrate the effectiveness of RMC empirically.
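The abstract describes the runtime-adaptation idea only at a high level. The snippet below is a minimal, hypothetical sketch of what “adapting the network at runtime before making a prediction” could look like in PyTorch: retrieve a few stored examples closest to the incoming input in feature space, briefly fine-tune a copy of the model on that neighborhood, then predict. Everything here (the function rmc_predict, the feature_extractor, the memory buffer, and the hyperparameters k, steps, lr) is an illustrative assumption, not the authors' exact algorithm; in particular, the gradient-masking component of RMC is not modeled.

```python
import copy

import torch
import torch.nn.functional as F


def rmc_predict(model, feature_extractor, x, memory_x, memory_y,
                k=8, steps=5, lr=1e-4):
    """Hypothetical runtime adaptation in the spirit of RMC (a sketch,
    not the paper's exact procedure):
      1. retrieve the k stored examples nearest to x in feature space,
      2. briefly fine-tune a copy of the model on that neighborhood
         ("cleansing"),
      3. predict with the adapted copy.
    """
    model.eval()
    with torch.no_grad():
        # Distances between the incoming input and the memory in feature space.
        q = feature_extractor(x).flatten(1)           # shape (1, d)
        mem = feature_extractor(memory_x).flatten(1)  # shape (N, d)
        idx = torch.cdist(q, mem).squeeze(0).topk(k, largest=False).indices

    # Adapt a copy so the base model stays untouched between queries.
    adapted = copy.deepcopy(model)
    adapted.train()
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(adapted(memory_x[idx]), memory_y[idx])
        loss.backward()
        opt.step()

    adapted.eval()
    with torch.no_grad():
        return adapted(x).argmax(dim=1)
```

One natural choice of memory is the training set, possibly augmented with adversarial examples, though that choice is an assumption here; precomputing the memory features once, rather than per query as above, is the obvious optimization.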

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wu20f,
  title     = {Adversarial Robustness via Runtime Masking and Cleansing},
  author    = {Wu, Yi-Hsuan and Yuan, Chia-Hung and Wu, Shan-Hung},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10399--10409},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wu20f/wu20f.pdf},
  url       = {https://proceedings.mlr.press/v119/wu20f.html}
}
Endnote
%0 Conference Paper
%T Adversarial Robustness via Runtime Masking and Cleansing
%A Yi-Hsuan Wu
%A Chia-Hung Yuan
%A Shan-Hung Wu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wu20f
%I PMLR
%P 10399--10409
%U https://proceedings.mlr.press/v119/wu20f.html
%V 119
APA
Wu, Y., Yuan, C. & Wu, S. (2020). Adversarial Robustness via Runtime Masking and Cleansing. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10399-10409. Available from https://proceedings.mlr.press/v119/wu20f.html.
