On the (In)feasibility of ML Backdoor Detection as an Hypothesis Testing Problem

Georg Pichler, Marco Romanelli, Divya Prakash Manivannan, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4051-4059, 2024.

Abstract

We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of such problems, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal (adversary-unaware) backdoor detection is impossible, except for very small alphabet sizes. Thus, we argue that backdoor detection methods need to be either explicitly or implicitly adversary-aware. However, our work does not imply that backdoor detection cannot work in specific scenarios, as evidenced by successful backdoor detection methods in the scientific literature. Furthermore, we connect our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem.
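To make the hypothesis-testing framing concrete, here is a minimal illustrative sketch, not the paper's actual construction: backdoor detection cast as a binary hypothesis test (H0: the model is clean; H1: the model is backdoored), with a hypothetical one-dimensional Gaussian detector score standing in for the distributions over models that the paper analyzes. The means, variance, and threshold below are assumptions chosen for illustration only.

```python
import numpy as np

# Toy stand-in for the paper's statistical definition of backdoor
# detection as binary hypothesis testing:
#   H0: model is clean       -> score ~ N(0, 1)
#   H1: model is backdoored  -> score ~ N(1, 1)
# These distributions are hypothetical; the paper works with
# distributions over model/trigger pairs, not a scalar score.

rng = np.random.default_rng(0)

def log_likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # log p1(x) - log p0(x) for two equal-variance Gaussians.
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def detect(x, threshold=0.0):
    # Declare "backdoored" (H1) when the LLR exceeds the threshold.
    return log_likelihood_ratio(x) > threshold

# Monte Carlo estimate of the two error probabilities of the test.
clean_scores = rng.normal(0.0, 1.0, 10_000)       # samples under H0
backdoored_scores = rng.normal(1.0, 1.0, 10_000)  # samples under H1

false_alarm = detect(clean_scores).mean()              # P(say H1 | H0)
missed_detection = 1.0 - detect(backdoored_scores).mean()  # P(say H0 | H1)
print(f"false alarm ~ {false_alarm:.3f}, missed detection ~ {missed_detection:.3f}")
```

In this toy setting the detector knows both distributions, so the likelihood-ratio test beats random guessing; the paper's no-free-lunch result concerns the adversary-unaware case, where no single test can achieve this against every backdooring strategy.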

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-pichler24a,
  title = {On the (In)feasibility of {ML} Backdoor Detection as an Hypothesis Testing Problem},
  author = {Pichler, Georg and Romanelli, Marco and Prakash Manivannan, Divya and Krishnamurthy, Prashanth and Khorrami, Farshad and Garg, Siddharth},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages = {4051--4059},
  year = {2024},
  editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume = {238},
  series = {Proceedings of Machine Learning Research},
  month = {02--04 May},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v238/pichler24a/pichler24a.pdf},
  url = {https://proceedings.mlr.press/v238/pichler24a.html},
  abstract = {We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of such problems, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal (adversary-unaware) backdoor detection is impossible, except for very small alphabet sizes. Thus, we argue that backdoor detection methods need to be either explicitly or implicitly adversary-aware. However, our work does not imply that backdoor detection cannot work in specific scenarios, as evidenced by successful backdoor detection methods in the scientific literature. Furthermore, we connect our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem.}
}
Endnote
%0 Conference Paper
%T On the (In)feasibility of ML Backdoor Detection as an Hypothesis Testing Problem
%A Georg Pichler
%A Marco Romanelli
%A Divya Prakash Manivannan
%A Prashanth Krishnamurthy
%A Farshad Khorrami
%A Siddharth Garg
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-pichler24a
%I PMLR
%P 4051--4059
%U https://proceedings.mlr.press/v238/pichler24a.html
%V 238
%X We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of such problems, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal (adversary-unaware) backdoor detection is impossible, except for very small alphabet sizes. Thus, we argue that backdoor detection methods need to be either explicitly or implicitly adversary-aware. However, our work does not imply that backdoor detection cannot work in specific scenarios, as evidenced by successful backdoor detection methods in the scientific literature. Furthermore, we connect our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem.
APA
Pichler, G., Romanelli, M., Prakash Manivannan, D., Krishnamurthy, P., Khorrami, F. & Garg, S. (2024). On the (In)feasibility of ML Backdoor Detection as an Hypothesis Testing Problem. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4051-4059. Available from https://proceedings.mlr.press/v238/pichler24a.html.