Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks

Yiwei Lu, Gautam Kamath, Yaoliang Yu
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:22856-22879, 2023.

Abstract

Indiscriminate data poisoning attacks aim to decrease a model’s test accuracy by injecting a small amount of corrupted training data. Despite significant interest, existing attacks remain relatively ineffective against modern machine learning (ML) architectures. In this work, we introduce the notion of model poisoning reachability as a technical tool to explore the intrinsic limits of data poisoning attacks towards target parameters (i.e., model-targeted attacks). We derive an easily computable threshold to establish and quantify a surprising phase transition phenomenon among popular ML models: data poisoning attacks can achieve certain target parameters only when the poisoning ratio exceeds our threshold. Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing indiscriminate data poisoning baselines over a range of datasets and models. Our work highlights the critical role played by the poisoning ratio, and sheds new light on existing empirical results, attacks, and mitigation strategies in data poisoning.
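
To make the gradient-canceling idea mentioned in the abstract concrete, the sketch below (not the authors' implementation) optimizes a small set of poisoned points on synthetic logistic regression so that the gradient of the clean-plus-poisoned training loss vanishes at chosen target parameters, making the target a stationary point of poisoned training. The synthetic data, loss, poisoning ratio, labels, and box constraint are all illustrative assumptions.

# Minimal sketch of a gradient-canceling-style attack (assumed setup, PyTorch).
import torch

torch.manual_seed(0)
n, d = 1000, 10
n_poison = 30                              # ~3% poisoning ratio (illustrative)

# Synthetic clean data with labels in {-1, +1}.
X_clean = torch.randn(n, d)
theta_true = torch.randn(d)
y_clean = torch.sign(X_clean @ theta_true + 0.1 * torch.randn(n))

# Target parameters the attacker would like the victim model to end up at.
theta_target = (-theta_true).clone().requires_grad_(True)

def summed_loss(theta, X, y):
    """Summed (not averaged) logistic loss, so clean and poisoned terms add directly."""
    return torch.nn.functional.softplus(-y * (X @ theta)).sum()

# Poisoned features are the attack variables; labels are fixed to +1 for simplicity.
X_p = torch.randn(n_poison, d, requires_grad=True)
y_p = torch.ones(n_poison)
opt = torch.optim.Adam([X_p], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    total = summed_loss(theta_target, X_clean, y_clean) + summed_loss(theta_target, X_p, y_p)
    # Gradient of the poisoned training loss at theta_target, kept differentiable in X_p.
    (g,) = torch.autograd.grad(total, theta_target, create_graph=True)
    residual = g.pow(2).sum()              # drive the gradient at theta_target to zero
    residual.backward()                    # theta_target itself is never updated
    opt.step()
    with torch.no_grad():
        X_p.clamp_(-3.0, 3.0)              # keep poisoned features in a plausible range

# How much of the clean gradient at the target the poisoned points managed to cancel.
g_clean = torch.autograd.grad(summed_loss(theta_target, X_clean, y_clean), theta_target)[0]
total = summed_loss(theta_target, X_clean, y_clean) + summed_loss(theta_target, X_p, y_p)
g_final = torch.autograd.grad(total, theta_target)[0]
print(f"clean gradient norm at target:      {g_clean.norm().item():.2f}")
print(f"remaining total gradient norm:      {g_final.norm().item():.2f}")

Intuitively, if the residual gradient cannot be driven to (near) zero at a given poisoning ratio, the target parameters are not reachable at that ratio; the paper's reachability notion and threshold formalize and quantify this phase transition, rather than relying on such a numerical check.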

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-lu23e,
  title     = {Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks},
  author    = {Lu, Yiwei and Kamath, Gautam and Yu, Yaoliang},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {22856--22879},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/lu23e/lu23e.pdf},
  url       = {https://proceedings.mlr.press/v202/lu23e.html}
}
Endnote
%0 Conference Paper
%T Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
%A Yiwei Lu
%A Gautam Kamath
%A Yaoliang Yu
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-lu23e
%I PMLR
%P 22856--22879
%U https://proceedings.mlr.press/v202/lu23e.html
%V 202
APA
Lu, Y., Kamath, G. & Yu, Y. (2023). Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:22856-22879. Available from https://proceedings.mlr.press/v202/lu23e.html.
