Improving Adversarial Transferability via Decision Boundary Adaptation

Jiayu Zhang, Zhiyu Zhu, Zhibo Jin, Xinyi Wang, Huaming Chen, Kim-Kwang Raymond Choo
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:4943-4958, 2025.

Abstract

Black-box attacks play a pivotal role in adversarial attacks. However, existing approaches often focus predominantly on attacking from a data-centric perspective, neglecting crucial aspects of the models. To address this issue, we propose a novel approach in this paper, coined Decision Boundary Adaptation (DBA). Our approach innovatively adopts a model-centric viewpoint, leveraging operations on the model to attain properties that enhance transferability. We observe that a flatter curvature of the statistical manifold, influenced by both samples and model parameters, leads to stronger transferability of the adversarial attacks. To leverage this, we introduce the concept of local flatness, providing an evaluation method for the local flatness property along with a detailed mathematical proof. Additionally, we demonstrate a consistent relationship between local flatness, the model’s decision boundary, and the gradient descent process, showing how flatness can be achieved through gradient descent at the model parameter level. Through extensive evaluation using state-of-the-art adversarial attack techniques, our DBA approach significantly enhances the black-box attack capabilities of all the tested adversarial attack methods. The implementation of our method is available at https://github.com/LMBTough/DBA.
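To illustrate the flatness intuition the abstract appeals to, the toy sketch below compares a "sharp" and a "flat" loss minimum using a generic worst-case perturbation proxy for sharpness. This is a hypothetical illustration only: the `sharpness` function, the perturbation radius, and the quadratic toy losses are assumptions for demonstration, not the paper's actual local-flatness metric or DBA procedure.

```python
import random

# Hypothetical sharpness proxy (NOT the paper's metric): the worst-case loss
# increase observed under small random perturbations of the parameters.
# A flatter loss surface around a minimum yields a smaller value.
def sharpness(loss, params, radius=0.05, trials=200, seed=0):
    rng = random.Random(seed)
    base = loss(params)
    worst = 0.0
    for _ in range(trials):
        perturbed = [p + rng.uniform(-radius, radius) for p in params]
        worst = max(worst, loss(perturbed) - base)
    return worst

# Two toy quadratic "models", both minimized at w = 0:
# a sharp minimum (large curvature) and a flat one (small curvature).
sharp_loss = lambda w: 50.0 * w[0] ** 2
flat_loss = lambda w: 0.5 * w[0] ** 2

s_sharp = sharpness(sharp_loss, [0.0])
s_flat = sharpness(flat_loss, [0.0])
print(s_flat < s_sharp)  # flatter minimum -> smaller sharpness proxy
```

The same perturbation seed is reused for both losses, so the comparison isolates curvature; this mirrors, in miniature, the claim that flatter curvature (here of a loss surface, in the paper of the statistical manifold) is a measurable property one can optimize toward via gradient descent on the parameters.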

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-zhang25e,
  title     = {Improving Adversarial Transferability via Decision Boundary Adaptation},
  author    = {Zhang, Jiayu and Zhu, Zhiyu and Jin, Zhibo and Wang, Xinyi and Chen, Huaming and Choo, Kim-Kwang Raymond},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {4943--4958},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/zhang25e/zhang25e.pdf},
  url       = {https://proceedings.mlr.press/v286/zhang25e.html},
  abstract  = {Black-box attacks play a pivotal role in adversarial attacks. However, existing approaches often focus predominantly on attacking from a data-centric perspective, neglecting crucial aspects of the models. To address this issue, we propose a novel approach in this paper, coined Decision Boundary Adaptation (DBA). Our approach innovatively adopts a model-centric viewpoint, leveraging operations on the model to attain properties that enhance transferability. We observe that a flatter curvature of the statistical manifold, influenced by both samples and model parameters, leads to stronger transferability of the adversarial attacks. To leverage this, we introduce the concept of local flatness, providing an evaluation method for local flatness property along with a detailed mathematical proof. Additionally, we demonstrate a consistent relationship between local flatness, the model’s decision boundary, and the gradient descent process, showing how flatness can be achieved through gradient descent at the model parameter level. Through extensive evaluation using state-of-the-art adversarial attack techniques, our DBA approach significantly enhances the black-box attack capabilities of all the tested adversarial attack methods. The implementation of our method is available at https://github.com/LMBTough/DBA.}
}
Endnote
%0 Conference Paper
%T Improving Adversarial Transferability via Decision Boundary Adaptation
%A Jiayu Zhang
%A Zhiyu Zhu
%A Zhibo Jin
%A Xinyi Wang
%A Huaming Chen
%A Kim-Kwang Raymond Choo
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-zhang25e
%I PMLR
%P 4943--4958
%U https://proceedings.mlr.press/v286/zhang25e.html
%V 286
%X Black-box attacks play a pivotal role in adversarial attacks. However, existing approaches often focus predominantly on attacking from a data-centric perspective, neglecting crucial aspects of the models. To address this issue, we propose a novel approach in this paper, coined Decision Boundary Adaptation (DBA). Our approach innovatively adopts a model-centric viewpoint, leveraging operations on the model to attain properties that enhance transferability. We observe that a flatter curvature of the statistical manifold, influenced by both samples and model parameters, leads to stronger transferability of the adversarial attacks. To leverage this, we introduce the concept of local flatness, providing an evaluation method for local flatness property along with a detailed mathematical proof. Additionally, we demonstrate a consistent relationship between local flatness, the model’s decision boundary, and the gradient descent process, showing how flatness can be achieved through gradient descent at the model parameter level. Through extensive evaluation using state-of-the-art adversarial attack techniques, our DBA approach significantly enhances the black-box attack capabilities of all the tested adversarial attack methods. The implementation of our method is available at https://github.com/LMBTough/DBA.
APA
Zhang, J., Zhu, Z., Jin, Z., Wang, X., Chen, H. & Choo, K.R. (2025). Improving Adversarial Transferability via Decision Boundary Adaptation. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:4943-4958. Available from https://proceedings.mlr.press/v286/zhang25e.html.