A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks

Boqi Li, Weiwei Liu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:28309-28342, 2024.

Abstract

The rising threat of backdoor poisoning attacks (BPAs) on Deep Neural Networks (DNNs) has become a significant concern in recent years. In such attacks, the adversary strategically targets a specific class and generates a poisoned training set. A neural network (NN) trained on the poisoned set predicts any input bearing the trigger pattern as the target label, while maintaining accurate outputs on clean inputs. However, why BPAs work remains underexplored. To fill this gap, we employ a dirty-label attack and conduct a detailed analysis of BPAs in a two-layer convolutional neural network. We provide theoretical insights and results on the effectiveness of BPAs. Our experimental results on two real-world datasets validate our theoretical findings.
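To make the attack setting concrete, the sketch below illustrates a generic dirty-label poisoning step of the kind the paper analyzes: a small trigger patch is stamped onto a subset of training images and their labels are flipped to the attacker's target class. This is a minimal illustrative sketch, not the authors' exact construction; the function name, patch placement, poisoning rate, and patch value are assumptions made for the example.

# Illustrative sketch (assumed construction, not the authors' code):
# dirty-label backdoor poisoning of an image classification training set.
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   patch_size=3, patch_value=1.0, seed=0):
    """Return a poisoned copy of (images, labels).

    images : (N, H, W, C) float array with values in [0, 1]
    labels : (N,) int array of class labels
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    # Only poison samples that do not already belong to the target class.
    candidates = np.flatnonzero(labels != target_label)
    n_poison = min(int(poison_rate * len(labels)), len(candidates))
    chosen = rng.choice(candidates, size=n_poison, replace=False)

    # Stamp the trigger: a bright square in the bottom-right corner.
    images[chosen, -patch_size:, -patch_size:, :] = patch_value

    # Dirty-label attack: relabel the poisoned samples as the target class.
    labels[chosen] = target_label
    return images, labels

A network trained on the resulting set is expected to classify clean inputs correctly while mapping any input carrying the trigger patch to target_label, which is the behavior the paper studies theoretically.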

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24at,
  title     = {A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks},
  author    = {Li, Boqi and Liu, Weiwei},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {28309--28342},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24at/li24at.pdf},
  url       = {https://proceedings.mlr.press/v235/li24at.html},
  abstract  = {The rising threat of backdoor poisoning attacks (BPAs) on Deep Neural Networks (DNNs) has become a significant concern in recent years. In such attacks, the adversaries strategically target a specific class and generate a poisoned training set. The neural network (NN), well-trained on the poisoned training set, is able to predict any input with the trigger pattern as the targeted label, while maintaining accurate outputs for clean inputs. However, why the BPAs work remains less explored. To fill this gap, we employ a dirty-label attack and conduct a detailed analysis of BPAs in a two-layer convolutional neural network. We provide theoretical insights and results on the effectiveness of BPAs. Our experimental results on two real-world datasets validate our theoretical findings.}
}
Endnote
%0 Conference Paper
%T A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks
%A Boqi Li
%A Weiwei Liu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24at
%I PMLR
%P 28309--28342
%U https://proceedings.mlr.press/v235/li24at.html
%V 235
%X The rising threat of backdoor poisoning attacks (BPAs) on Deep Neural Networks (DNNs) has become a significant concern in recent years. In such attacks, the adversaries strategically target a specific class and generate a poisoned training set. The neural network (NN), well-trained on the poisoned training set, is able to predict any input with the trigger pattern as the targeted label, while maintaining accurate outputs for clean inputs. However, why the BPAs work remains less explored. To fill this gap, we employ a dirty-label attack and conduct a detailed analysis of BPAs in a two-layer convolutional neural network. We provide theoretical insights and results on the effectiveness of BPAs. Our experimental results on two real-world datasets validate our theoretical findings.
APA
Li, B. & Liu, W. (2024). A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:28309-28342. Available from https://proceedings.mlr.press/v235/li24at.html.
