IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:18992-19022, 2024.

Abstract

Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection method (dubbed IBD-PSC) that serves as a ‘firewall’ to filter out malicious test images. Our method is motivated by an intriguing phenomenon, parameter-oriented scaling consistency (PSC): when model parameters are amplified, the prediction confidences of poisoned samples remain significantly more consistent than those of benign ones. We provide a theoretical analysis that grounds the PSC phenomenon and design an adaptive method for selecting which batch normalization (BN) layers to scale up for effective detection. Extensive experiments on benchmark datasets verify the effectiveness and efficiency of IBD-PSC and its resistance to adaptive attacks. Code is available at https://github.com/THUYimingLi/BackdoorBox.
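
The PSC test described in the abstract can be summarized in a few lines: amplify the parameters of selected batch normalization (BN) layers by several factors and check how consistently the model keeps its confidence in the original prediction. Below is a minimal, hypothetical PyTorch sketch of that idea; the helper names (scale_bn, psc_score), the fixed choice of the last two BN layers (the paper selects layers adaptively), the scaling factors, and any decision threshold are illustrative assumptions rather than the authors' implementation, which lives in the linked BackdoorBox repository.

```python
# Minimal, hypothetical sketch of parameter-oriented scaling consistency (PSC)
# detection in PyTorch. Layer selection, scaling factors, and the decision
# threshold are illustrative, not the paper's tuned procedure.
import copy

import torch
import torch.nn as nn


def scale_bn(model: nn.Module, gamma: float, num_layers: int = 2) -> nn.Module:
    """Return a copy of `model` with the affine parameters of its last
    `num_layers` BatchNorm2d layers amplified by `gamma`."""
    scaled = copy.deepcopy(model)
    bns = [m for m in scaled.modules() if isinstance(m, nn.BatchNorm2d)]
    with torch.no_grad():
        for bn in bns[-num_layers:]:
            bn.weight.mul_(gamma)
            bn.bias.mul_(gamma)
    return scaled


def psc_score(model: nn.Module, x: torch.Tensor,
              gammas=(1.25, 1.5, 1.75, 2.0)) -> torch.Tensor:
    """Average confidence on the original prediction across scaled copies.
    Poisoned inputs tend to keep a high, consistent confidence, so inputs
    scoring above a chosen threshold T would be flagged as malicious."""
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=1)  # original (unscaled) prediction
        confs = []
        for g in gammas:
            probs = torch.softmax(scale_bn(model, g)(x), dim=1)
            confs.append(probs.gather(1, pred.unsqueeze(1)).squeeze(1))
    return torch.stack(confs).mean(dim=0)
```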

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-hou24a,
  title     = {{IBD}-{PSC}: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency},
  author    = {Hou, Linshan and Feng, Ruili and Hua, Zhongyun and Luo, Wei and Zhang, Leo Yu and Li, Yiming},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {18992--19022},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/hou24a/hou24a.pdf},
  url       = {https://proceedings.mlr.press/v235/hou24a.html},
  abstract  = {Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) as a ‘firewall’ to filter out malicious testing images. Our method is motivated by an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are significantly more consistent than those of benign ones when amplifying model parameters. In particular, we provide theoretical analysis to safeguard the foundations of the PSC phenomenon. We also design an adaptive method to select BN layers to scale up for effective detection. Extensive experiments are conducted on benchmark datasets, verifying the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks. Codes are available at https://github.com/THUYimingLi/BackdoorBox.}
}
EndNote
%0 Conference Paper
%T IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
%A Linshan Hou
%A Ruili Feng
%A Zhongyun Hua
%A Wei Luo
%A Leo Yu Zhang
%A Yiming Li
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-hou24a
%I PMLR
%P 18992--19022
%U https://proceedings.mlr.press/v235/hou24a.html
%V 235
%X Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) as a ‘firewall’ to filter out malicious testing images. Our method is motivated by an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are significantly more consistent than those of benign ones when amplifying model parameters. In particular, we provide theoretical analysis to safeguard the foundations of the PSC phenomenon. We also design an adaptive method to select BN layers to scale up for effective detection. Extensive experiments are conducted on benchmark datasets, verifying the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks. Codes are available at https://github.com/THUYimingLi/BackdoorBox.
APA
Hou, L., Feng, R., Hua, Z., Luo, W., Zhang, L. Y., & Li, Y. (2024). IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:18992-19022. Available from https://proceedings.mlr.press/v235/hou24a.html.
