Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks

Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11163-11172, 2020.

Abstract

We study the problem of constructing black-box adversarial attacks, where no model information is revealed except the feedback the model returns for queried inputs. To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs perturbed along different search directions. However, these methods suffer from poor query efficiency because the search directions are sampled randomly. To mitigate this issue, we formulate the goal of mounting an efficient attack as an optimization problem in which the adversary tries to fool the target model within a limited number of queries. Under this setting, the adversary has to select appropriate search directions to reduce the number of model queries. Solving the efficient-attack problem shows that knowledge must be distilled along both the path of the adversarial examples and the path of the search directions. We therefore propose a novel framework, dual-path distillation, which uses the feedback knowledge not only to craft adversarial examples but also to alter the search directions, yielding efficient attacks. Experimental results suggest that our framework significantly increases query efficiency.
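To make the setting concrete, here is a minimal sketch of the random-direction, zeroth-order baseline the abstract critiques: the attacker only observes loss feedback and estimates a gradient by querying along randomly sampled directions, so every direction costs two model queries. This is not the authors' implementation; the function query_loss and all hyperparameters (n_dirs, sigma, lr, eps) are illustrative assumptions.

import numpy as np

def estimate_gradient(query_loss, x, n_dirs=20, sigma=0.01):
    """Zeroth-order gradient estimate from randomly sampled search directions.

    Each sampled direction costs two queries (one per finite-difference
    side), which is the query inefficiency the paper targets.
    """
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = np.random.randn(*x.shape)          # random search direction
        u /= np.linalg.norm(u)
        # Loss feedback along +u and -u: two model queries.
        delta = query_loss(x + sigma * u) - query_loss(x - sigma * u)
        grad += (delta / (2 * sigma)) * u
    return grad / n_dirs

def attack(query_loss, x, steps=100, lr=0.5, eps=0.05):
    """Iterative attack: ascend the estimated gradient inside an L_inf ball."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(query_loss, x_adv)
        x_adv = x_adv + lr * np.sign(g)        # increase the adversarial loss
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

Dual-path distillation, as the abstract describes it, departs from this baseline by also feeding the query feedback back into the choice of search directions, rather than sampling u blindly at every step.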

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20o,
  title     = {Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks},
  author    = {Zhang, Yonggang and Li, Ya and Liu, Tongliang and Tian, Xinmei},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11163--11172},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20o/zhang20o.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20o.html},
  abstract  = {We study the problem of constructing black-box adversarial attacks, where no model information is revealed except for the feedback knowledge of the given inputs. To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs that are perturbed with different searching directions. However, these methods suffer from poor query efficiency since the employed searching directions are sampled randomly. To mitigate this issue, we formulate the goal of mounting efficient attacks as an optimization problem in which the adversary tries to fool the target model with a limited number of queries. Under such settings, the adversary has to select appropriate searching directions to reduce the number of model queries. By solving the efficient-attack problem, we find that we need to distill the knowledge in both the path of the adversarial examples and the path of the searching directions. Therefore, we propose a novel framework, dual-path distillation, that utilizes the feedback knowledge not only to craft adversarial examples but also to alter the searching directions to achieve efficient attacks. Experimental results suggest that our framework can significantly increase the query efficiency.}
}
Endnote
%0 Conference Paper
%T Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks
%A Yonggang Zhang
%A Ya Li
%A Tongliang Liu
%A Xinmei Tian
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20o
%I PMLR
%P 11163--11172
%U https://proceedings.mlr.press/v119/zhang20o.html
%V 119
%X We study the problem of constructing black-box adversarial attacks, where no model information is revealed except for the feedback knowledge of the given inputs. To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs that are perturbed with different searching directions. However, these methods suffer from poor query efficiency since the employed searching directions are sampled randomly. To mitigate this issue, we formulate the goal of mounting efficient attacks as an optimization problem in which the adversary tries to fool the target model with a limited number of queries. Under such settings, the adversary has to select appropriate searching directions to reduce the number of model queries. By solving the efficient-attack problem, we find that we need to distill the knowledge in both the path of the adversarial examples and the path of the searching directions. Therefore, we propose a novel framework, dual-path distillation, that utilizes the feedback knowledge not only to craft adversarial examples but also to alter the searching directions to achieve efficient attacks. Experimental results suggest that our framework can significantly increase the query efficiency.
APA
Zhang, Y., Li, Y., Liu, T. & Tian, X. (2020). Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11163-11172. Available from https://proceedings.mlr.press/v119/zhang20o.html.