MFA: Multi-layer Feature-aware Attack for Object Detection

Wen Chen, Yushan Zhang, Zhiheng Li, Yuehuan Wang
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:336-346, 2023.

Abstract

Physical adversarial attacks can mislead detectors in real-world scenarios and have attracted increasing attention. However, most existing works take the detector’s final outputs as attack targets while ignoring the inherent characteristics of objects. This can leave attacks trapped in model-specific local optima and reduce transferability. To address this issue, we propose a Multi-layer Feature-aware Attack (MFA) that considers the importance of multi-layer features and disrupts the critical object-aware features that dominate decision-making across different models. Specifically, we leverage the location and category information in the detector’s outputs to assign attribution scores to the different feature layers. We then weight each feature by its attribution score and design a pixel-level loss function, optimized in the direction opposite to object detection, to generate adversarial camouflages. We conduct extensive experiments in both the digital and physical worlds on ten outstanding detection models and demonstrate the superior performance of MFA in terms of attacking capability and transferability. Our code is available at: \url{https://github.com/ChenWen1997/MFA}.
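The abstract only outlines the attack pipeline at a high level: attribute importance to multi-layer features from the detector's outputs, weight the features accordingly, then optimize a loss in the opposite direction of detection. The sketch below is a rough PyTorch illustration of that weighting-then-opposing structure, not the authors' implementation (the actual attribution scheme and pixel-level loss are given in the paper and the released code); the function names, the gradient-based attribution proxy, and the stand-in detection score are all hypothetical.

```python
import torch

def layer_attribution_scores(features, detection_score):
    # Attribute importance to each feature layer: gradient of a detector output
    # score (carrying location/category evidence) with respect to that layer.
    grads = torch.autograd.grad(detection_score, features)
    scores = torch.stack([g.abs().mean() for g in grads])
    return scores / (scores.sum() + 1e-12)  # normalized per-layer weights

def feature_aware_loss(features, weights):
    # Weighted penalty on object-aware activations; minimizing it pushes the
    # features (and hence the camouflaged input) away from the detection optimum.
    per_layer = torch.stack([f.pow(2).mean() for f in features])
    return (weights.detach() * per_layer).sum()

# Toy usage with random tensors standing in for multi-scale detector features.
feats = [torch.randn(1, 256, s, s, requires_grad=True) for s in (64, 32, 16)]
det_score = sum(f.mean() for f in feats)       # hypothetical detection score
w = layer_attribution_scores(feats, det_score)
loss = feature_aware_loss(feats, w)
loss.backward()  # in a real attack, these gradients would update the camouflage texture
```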

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-chen23d,
  title     = {{MFA}: Multi-layer Feature-aware Attack for Object Detection},
  author    = {Chen, Wen and Zhang, Yushan and Li, Zhiheng and Wang, Yuehuan},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {336--346},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/chen23d/chen23d.pdf},
  url       = {https://proceedings.mlr.press/v216/chen23d.html},
  abstract  = {Physical adversarial attacks can mislead detectors in real-world scenarios and have attracted increasing attention. However, most existing works manipulate the detector’s final outputs as attack targets while ignoring the inherent characteristics of objects. This can result in attacks being trapped in model-specific local optima and reduced transferability. To address this issue, we propose a Multi-layer Feature-aware Attack (MFA) that considers the importance of multi-layer features and disrupts critical object-aware features that dominate decision-making across different models. Specifically, we leverage the location and category information of detector outputs to assign attribution scores to different feature layers. Then, we weight each feature according to their attribution results and design a pixel-level loss function in the opposite optimized direction of object detection to generate adversarial camouflages. We conduct extensive experiments in both digital and physical worlds on ten outstanding detection models and demonstrate the superior performance of MFA in terms of attacking capability and transferability. Our code is available at: \url{https://github.com/ChenWen1997/MFA}.}
}
Endnote
%0 Conference Paper
%T MFA: Multi-layer Feature-aware Attack for Object Detection
%A Wen Chen
%A Yushan Zhang
%A Zhiheng Li
%A Yuehuan Wang
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-chen23d
%I PMLR
%P 336--346
%U https://proceedings.mlr.press/v216/chen23d.html
%V 216
%X Physical adversarial attacks can mislead detectors in real-world scenarios and have attracted increasing attention. However, most existing works manipulate the detector’s final outputs as attack targets while ignoring the inherent characteristics of objects. This can result in attacks being trapped in model-specific local optima and reduced transferability. To address this issue, we propose a Multi-layer Feature-aware Attack (MFA) that considers the importance of multi-layer features and disrupts critical object-aware features that dominate decision-making across different models. Specifically, we leverage the location and category information of detector outputs to assign attribution scores to different feature layers. Then, we weight each feature according to their attribution results and design a pixel-level loss function in the opposite optimized direction of object detection to generate adversarial camouflages. We conduct extensive experiments in both digital and physical worlds on ten outstanding detection models and demonstrate the superior performance of MFA in terms of attacking capability and transferability. Our code is available at: \url{https://github.com/ChenWen1997/MFA}.
APA
Chen, W., Zhang, Y., Li, Z. & Wang, Y. (2023). MFA: Multi-layer Feature-aware Attack for Object Detection. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:336-346. Available from https://proceedings.mlr.press/v216/chen23d.html.