Adversarial Inputs for Linear Algebra Backends

Jonas Möller, Lukas Pirch, Felix Weissberg, Sebastian Baunsgaard, Thorsten Eisenhofer, Konrad Rieck
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:44615-44626, 2025.

Abstract

Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
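
The discrepancies the abstract refers to stem from floating-point non-associativity: optimized kernels accumulate sums in different orders, so identical inputs can yield results that differ in the low-order bits. Below is a minimal, self-contained sketch of this underlying effect — illustrative only, not the paper's Chimera construction, and tied to no specific backend — showing how accumulation order alone perturbs a float32 dot product:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)
w = rng.standard_normal(10_000).astype(np.float32)

# Sequential left-to-right accumulation, as in a naive loop.
seq = np.float32(0.0)
for xi, wi in zip(x, w):
    seq += xi * wi

# Pairwise (tree) accumulation, resembling the reduction order of
# some optimized kernels.
prod = x * w
while prod.size > 1:
    if prod.size % 2:  # pad odd-length arrays with a zero
        prod = np.append(prod, np.float32(0.0))
    prod = prod[0::2] + prod[1::2]
tree = prod[0]

# The two sums typically differ in the last bits of the mantissa.
print(seq, tree, seq - tree)

Such a difference is negligible for a single dot product, but an adversary who can choose the input is free to steer it — for instance, by placing an example near a decision boundary so that the rounding discrepancy flips the predicted class on one backend but not another.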

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-moller25a,
  title     = {Adversarial Inputs for Linear Algebra Backends},
  author    = {M\"{o}ller, Jonas and Pirch, Lukas and Weissberg, Felix and Baunsgaard, Sebastian and Eisenhofer, Thorsten and Rieck, Konrad},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {44615--44626},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/moller25a/moller25a.pdf},
  url       = {https://proceedings.mlr.press/v267/moller25a.html},
  abstract  = {Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.}
}
Endnote
%0 Conference Paper
%T Adversarial Inputs for Linear Algebra Backends
%A Jonas Möller
%A Lukas Pirch
%A Felix Weissberg
%A Sebastian Baunsgaard
%A Thorsten Eisenhofer
%A Konrad Rieck
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-moller25a
%I PMLR
%P 44615--44626
%U https://proceedings.mlr.press/v267/moller25a.html
%V 267
%X Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
APA
Möller, J., Pirch, L., Weissberg, F., Baunsgaard, S., Eisenhofer, T. & Rieck, K. (2025). Adversarial Inputs for Linear Algebra Backends. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:44615-44626. Available from https://proceedings.mlr.press/v267/moller25a.html.