Evasion Attacks Against Bayesian Predictive Models

Pablo G. Arce, Roi Naveiro, David Ríos Insua
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:184-202, 2025.

Abstract

There is an increasing interest in analyzing the behavior of machine learning systems against adversarial attacks. However, most of the research in adversarial machine learning has focused on studying weaknesses against evasion or poisoning attacks to predictive models in classical setups, with the susceptibility of Bayesian predictive models to attacks remaining underexplored. This paper introduces a general methodology for designing optimal evasion attacks against such models. We investigate two adversarial objectives: perturbing specific point predictions and altering the entire posterior predictive distribution. For both scenarios, we propose novel gradient-based attacks and study their implementation and properties in various computational setups.
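To make the abstract's high-level description concrete, the sketch below illustrates the generic idea of a gradient-based evasion attack against a Bayesian predictive model: differentiate a posterior predictive quantity (here, the predictive mean of a conjugate Bayesian linear regression) with respect to the input and take projected gradient steps within a perturbation budget. This is only an illustrative sketch, not the attack proposed in the paper; the model choice, function names (posterior_params, evasion_attack), and hyperparameters (eps, lr, steps) are assumptions made for the example.

# Illustrative sketch only (not the paper's algorithm): gradient-based evasion
# of the posterior predictive mean of a conjugate Bayesian linear regression.
import jax
import jax.numpy as jnp

def posterior_params(X, y, alpha=1.0, sigma2=0.1):
    # Closed-form posterior N(m, S) for w ~ N(0, alpha^{-1} I) and Gaussian noise variance sigma2.
    d = X.shape[1]
    S_inv = alpha * jnp.eye(d) + X.T @ X / sigma2
    S = jnp.linalg.inv(S_inv)
    m = S @ X.T @ y / sigma2
    return m, S

def predictive_mean(x, m):
    # Posterior predictive mean at input x.
    return x @ m

def evasion_attack(x0, m, target, eps=0.5, lr=0.05, steps=100):
    # Projected gradient descent pushing the predictive mean toward `target`,
    # constrained to an L2 ball of radius eps around x0 (all values are assumptions).
    loss = lambda x: (predictive_mean(x, m) - target) ** 2
    grad_fn = jax.grad(loss)
    x = x0
    for _ in range(steps):
        x = x - lr * grad_fn(x)
        delta = x - x0
        norm = jnp.linalg.norm(delta)
        x = jnp.where(norm > eps, x0 + eps * delta / (norm + 1e-12), x)
    return x

# Toy usage on synthetic data: perturb one test point toward a target prediction.
key_X, key_noise = jax.random.split(jax.random.PRNGKey(0))
X = jax.random.normal(key_X, (50, 3))
y = X @ jnp.array([1.0, -2.0, 0.5]) + 0.1 * jax.random.normal(key_noise, (50,))
m, _ = posterior_params(X, y)
x_adv = evasion_attack(X[0], m, target=10.0)
print(predictive_mean(X[0], m), predictive_mean(x_adv, m))

The same recipe carries over to the paper's second objective (shifting the whole posterior predictive distribution) by replacing the squared-error loss on the predictive mean with a divergence between predictive distributions; that generalization is described in the paper itself.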

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-arce25a,
  title     = {Evasion Attacks Against Bayesian Predictive Models},
  author    = {Arce, Pablo G. and Naveiro, Roi and Insua, David R\'{i}os},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {184--202},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/arce25a/arce25a.pdf},
  url       = {https://proceedings.mlr.press/v286/arce25a.html},
  abstract  = {There is an increasing interest in analyzing the behavior of machine learning systems against adversarial attacks. However, most of the research in adversarial machine learning has focused on studying weaknesses against evasion or poisoning attacks to predictive models in classical setups, with the susceptibility of Bayesian predictive models to attacks remaining underexplored. This paper introduces a general methodology for designing optimal evasion attacks against such models. We investigate two adversarial objectives: perturbing specific point predictions and altering the entire posterior predictive distribution. For both scenarios, we propose novel gradient-based attacks and study their implementation and properties in various computational setups.}
}
Endnote
%0 Conference Paper
%T Evasion Attacks Against Bayesian Predictive Models
%A Pablo G. Arce
%A Roi Naveiro
%A David Ríos Insua
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-arce25a
%I PMLR
%P 184--202
%U https://proceedings.mlr.press/v286/arce25a.html
%V 286
%X There is an increasing interest in analyzing the behavior of machine learning systems against adversarial attacks. However, most of the research in adversarial machine learning has focused on studying weaknesses against evasion or poisoning attacks to predictive models in classical setups, with the susceptibility of Bayesian predictive models to attacks remaining underexplored. This paper introduces a general methodology for designing optimal evasion attacks against such models. We investigate two adversarial objectives: perturbing specific point predictions and altering the entire posterior predictive distribution. For both scenarios, we propose novel gradient-based attacks and study their implementation and properties in various computational setups.
APA
Arce, P.G., Naveiro, R. & Insua, D.R. (2025). Evasion Attacks Against Bayesian Predictive Models. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:184-202. Available from https://proceedings.mlr.press/v286/arce25a.html.
