Robustness to Programmable String Transformations via Augmented Abstract Training

Yuhao Zhang, Aws Albarghouthi, Loris D’Antoni
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11023-11032, 2020.

Abstract

Deep neural networks for natural language processing tasks are vulnerable to adversarial input perturbations. In this paper, we present a versatile language for programmatically specifying string transformations—e.g., insertions, deletions, substitutions, swaps, etc.—that are relevant to the task at hand. We then present an approach to adversarially training models that are robust to such user-defined string transformations. Our approach combines the advantages of search-based techniques for adversarial training with abstraction-based techniques. Specifically, we show how to decompose a set of user-defined string transformations into two component specifications, one that benefits from search and another from abstraction. We use our technique to train models on the AG and SST2 datasets and show that the resulting models are robust to combinations of user-defined transformations mimicking spelling mistakes and other meaning-preserving transformations.
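To make the abstract concrete, here is a minimal, purely illustrative sketch of what a user-defined perturbation space over strings can look like: two toy transformations (keyboard-neighbor substitution and adjacent-character swap) and an enumerator for all strings reachable within a bounded number of edits. The keyboard map, function names, and edit budget are hypothetical; this is not the paper's specification language or its abstraction-based training procedure.

```python
# Illustrative sketch only: toy string transformations in the spirit of
# the insertions/substitutions/swaps the paper targets. The hypothetical
# keyboard map below mimics spelling mistakes.

NEIGHBORS = {"a": "sq", "e": "wr", "o": "ip"}  # tiny hypothetical keyboard map

def substitutions(s, i):
    """Yield copies of s with the character at position i replaced by a
    neighboring keyboard key."""
    for c in NEIGHBORS.get(s[i], ""):
        yield s[:i] + c + s[i + 1:]

def swaps(s, i):
    """Yield s with characters at positions i and i+1 swapped."""
    if i + 1 < len(s):
        yield s[:i] + s[i + 1] + s[i] + s[i + 2:]

def perturbations(s, max_edits=1):
    """Enumerate every string reachable from s with up to max_edits
    applications of the transformations above (search-based view)."""
    frontier = {s}
    for _ in range(max_edits):
        nxt = set()
        for t in frontier:
            for i in range(len(t)):
                nxt.update(substitutions(t, i))
                nxt.update(swaps(t, i))
        frontier |= nxt
    return frontier

print(sorted(perturbations("code")))
```

Even for one edit, the set grows quickly with string length and transformation count, which is why the paper splits the specification into a part handled by explicit search and a part handled by abstraction.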

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20b,
  title     = {Robustness to Programmable String Transformations via Augmented Abstract Training},
  author    = {Zhang, Yuhao and Albarghouthi, Aws and D'Antoni, Loris},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11023--11032},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20b/zhang20b.pdf},
  url       = {http://proceedings.mlr.press/v119/zhang20b.html},
  abstract  = {Deep neural networks for natural language processing tasks are vulnerable to adversarial input perturbations. In this paper, we present a versatile language for programmatically specifying string transformations—e.g., insertions, deletions, substitutions, swaps, etc.—that are relevant to the task at hand. We then present an approach to adversarially training models that are robust to such user-defined string transformations. Our approach combines the advantages of search-based techniques for adversarial training with abstraction-based techniques. Specifically, we show how to decompose a set of user-defined string transformations into two component specifications, one that benefits from search and another from abstraction. We use our technique to train models on the AG and SST2 datasets and show that the resulting models are robust to combinations of user-defined transformations mimicking spelling mistakes and other meaning-preserving transformations.}
}
Endnote
%0 Conference Paper
%T Robustness to Programmable String Transformations via Augmented Abstract Training
%A Yuhao Zhang
%A Aws Albarghouthi
%A Loris D’Antoni
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20b
%I PMLR
%P 11023--11032
%U http://proceedings.mlr.press/v119/zhang20b.html
%V 119
%X Deep neural networks for natural language processing tasks are vulnerable to adversarial input perturbations. In this paper, we present a versatile language for programmatically specifying string transformations—e.g., insertions, deletions, substitutions, swaps, etc.—that are relevant to the task at hand. We then present an approach to adversarially training models that are robust to such user-defined string transformations. Our approach combines the advantages of search-based techniques for adversarial training with abstraction-based techniques. Specifically, we show how to decompose a set of user-defined string transformations into two component specifications, one that benefits from search and another from abstraction. We use our technique to train models on the AG and SST2 datasets and show that the resulting models are robust to combinations of user-defined transformations mimicking spelling mistakes and other meaning-preserving transformations.
APA
Zhang, Y., Albarghouthi, A. & D’Antoni, L. (2020). Robustness to Programmable String Transformations via Augmented Abstract Training. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11023-11032. Available from http://proceedings.mlr.press/v119/zhang20b.html.