Certified Robustness Against Natural Language Attacks by Causal Intervention

Haiteng Zhao, Chang Ma, Xinshuai Dong, Anh Tuan Luu, Zhi-Hong Deng, Hanwang Zhang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26958-26970, 2022.

Abstract

Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples. This paper follows a causal perspective to look into the adversarial vulnerability and proposes Causal Intervention by Semantic Smoothing (CISS), a novel framework towards robustness against natural language attacks. Instead of merely fitting observational data, CISS learns causal effects p(y|do(x)) by smoothing in the latent semantic space to make robust predictions, which scales to deep architectures and avoids tedious construction of noise customized for specific attacks. CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms. For example, on YELP, CISS surpasses the runner-up by 6.8% in terms of certified robustness against word substitutions, and achieves 80.7% empirical robustness when syntactic attacks are integrated.
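To make the smoothing step concrete, the following minimal sketch shows a generic randomized-smoothing prediction rule applied in a latent semantic space: inject noise into the latent representation of the input and take a majority vote over the noisy predictions. The names encoder (text to latent vector) and classifier (latent vector to logits), the Gaussian noise, and the default values of sigma and n_samples are illustrative assumptions, not the paper's exact architecture or certification procedure.

import torch

def smoothed_predict(encoder, classifier, x_tokens, num_classes, sigma=0.5, n_samples=100):
    # Smoothed prediction by majority vote under noise in the latent space.
    # `encoder` (text -> latent vector) and `classifier` (latent -> logits)
    # are assumed callables; Gaussian noise and the defaults for `sigma`
    # and `n_samples` are illustrative, not the paper's exact settings.
    with torch.no_grad():
        z = encoder(x_tokens)                          # latent semantic representation
        votes = torch.zeros(num_classes)
        for _ in range(n_samples):
            z_noisy = z + sigma * torch.randn_like(z)  # smoothing noise in latent space
            pred = classifier(z_noisy).argmax(dim=-1)  # class under this noise draw
            votes[pred] += 1
        return int(votes.argmax())                     # majority-vote (smoothed) class

In the spirit of randomized smoothing, a robustness certificate follows from how stable this majority vote is across noise draws; the paper states its certificate for word substitution attacks.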

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhao22g,
  title     = {Certified Robustness Against Natural Language Attacks by Causal Intervention},
  author    = {Zhao, Haiteng and Ma, Chang and Dong, Xinshuai and Luu, Anh Tuan and Deng, Zhi-Hong and Zhang, Hanwang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26958--26970},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhao22g/zhao22g.pdf},
  url       = {https://proceedings.mlr.press/v162/zhao22g.html},
  abstract  = {Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples. This paper follows a causal perspective to look into the adversarial vulnerability and proposes Causal Intervention by Semantic Smoothing (CISS), a novel framework towards robustness against natural language attacks. Instead of merely fitting observational data, CISS learns causal effects p(y|do(x)) by smoothing in the latent semantic space to make robust predictions, which scales to deep architectures and avoids tedious construction of noise customized for specific attacks. CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms. For example, on YELP, CISS surpasses the runner-up by 6.8% in terms of certified robustness against word substitutions, and achieves 80.7% empirical robustness when syntactic attacks are integrated.}
}
Endnote
%0 Conference Paper
%T Certified Robustness Against Natural Language Attacks by Causal Intervention
%A Haiteng Zhao
%A Chang Ma
%A Xinshuai Dong
%A Anh Tuan Luu
%A Zhi-Hong Deng
%A Hanwang Zhang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhao22g
%I PMLR
%P 26958--26970
%U https://proceedings.mlr.press/v162/zhao22g.html
%V 162
%X Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples. This paper follows a causal perspective to look into the adversarial vulnerability and proposes Causal Intervention by Semantic Smoothing (CISS), a novel framework towards robustness against natural language attacks. Instead of merely fitting observational data, CISS learns causal effects p(y|do(x)) by smoothing in the latent semantic space to make robust predictions, which scales to deep architectures and avoids tedious construction of noise customized for specific attacks. CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms. For example, on YELP, CISS surpasses the runner-up by 6.8% in terms of certified robustness against word substitutions, and achieves 80.7% empirical robustness when syntactic attacks are integrated.
APA
Zhao, H., Ma, C., Dong, X., Luu, A. T., Deng, Z.-H., & Zhang, H. (2022). Certified Robustness Against Natural Language Attacks by Causal Intervention. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26958-26970. Available from https://proceedings.mlr.press/v162/zhao22g.html.
