Adversarial Robustness for Code

Pavol Bielik, Martin Vechev
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:896-907, 2020.

Abstract

Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are vulnerable to adversarial attacks, and (iii) combining existing and novel techniques to improve robustness while preserving high accuracy.
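As an illustration of point (i), the sketch below shows one way such an attack can be instantiated: a greedy black-box search over semantics-preserving identifier renamings that tries to flip a model's prediction. The token-level interface and the names greedy_attack and toy_predict are illustrative assumptions for this sketch, not the paper's actual implementation.

from keyword import iskeyword

# Hypothetical sketch of a black-box adversarial attack on a code model.
# Programs are token lists; `predict` maps tokens to label probabilities.

def rename(tokens, old, new):
    # Renaming one identifier consistently preserves program semantics.
    return [new if t == old else t for t in tokens]

def greedy_attack(tokens, predict, true_label, candidates):
    # For each identifier, keep the candidate renaming that most reduces
    # the probability of the correct label; stop once the prediction flips.
    current = list(tokens)
    idents = sorted({t for t in tokens if t.isidentifier() and not iskeyword(t)})
    for old in idents:
        best_new, best_prob = None, predict(current)[true_label]
        for new in candidates:
            prob = predict(rename(current, old, new))[true_label]
            if prob < best_prob:
                best_new, best_prob = new, prob
        if best_new is not None:
            current = rename(current, old, best_new)
        probs = predict(current)
        if max(probs, key=probs.get) != true_label:
            return current  # prediction flipped: adversarial program found
    return None  # no flip within this perturbation budget

# Toy model, brittle on purpose: it predicts `int` only when a token
# literally named `count` is present.
def toy_predict(tokens):
    p_int = 0.9 if "count" in tokens else 0.2
    return {"int": p_int, "str": 1.0 - p_int}

prog = ["def", "f", "(", "count", ")", ":", "return", "count", "+", "1"]
print(greedy_attack(prog, toy_predict, "int", candidates=["n", "total"]))
# -> ['def', 'f', '(', 'n', ')', ':', 'return', 'n', '+', '1']

Because each edit renames an identifier consistently, the program's behavior is unchanged while its surface form, and hence the model's prediction, can vary, which is what makes such attacks possible on discrete, highly structured inputs.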

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-bielik20a,
  title     = {Adversarial Robustness for Code},
  author    = {Bielik, Pavol and Vechev, Martin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {896--907},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bielik20a/bielik20a.pdf},
  url       = {https://proceedings.mlr.press/v119/bielik20a.html},
  abstract  = {Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are vulnerable to adversarial attacks, and (iii) combining existing and novel techniques to improve robustness while preserving high accuracy.}
}
Endnote
%0 Conference Paper
%T Adversarial Robustness for Code
%A Pavol Bielik
%A Martin Vechev
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-bielik20a
%I PMLR
%P 896--907
%U https://proceedings.mlr.press/v119/bielik20a.html
%V 119
%X Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are vulnerable to adversarial attacks, and (iii) combining existing and novel techniques to improve robustness while preserving high accuracy.
APA
Bielik, P. & Vechev, M. (2020). Adversarial Robustness for Code. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:896-907. Available from https://proceedings.mlr.press/v119/bielik20a.html.