TtBA: Two-third Bridge Approach for Decision-Based Adversarial Attack
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:65918-65935, 2025.
Abstract
A key challenge in black-box adversarial attacks is the high query complexity in hard-label settings, where only the top-1 predicted label from the target deep model is accessible. In this paper, we propose a novel normal-vector-based method called Two-third Bridge Attack (TtBA). An innovative bridge direction is introduced, defined as a weighted combination of the current unit perturbation direction and its unit normal vector, controlled by a weight parameter $k$. We further use binary search to identify $k = k_\text{bridge}$, for which the bridge direction has the same decision boundary as the current direction. Notably, we observe that $k = \tfrac{2}{3} k_\text{bridge}$ yields a near-optimal perturbation direction, ensuring the stealthiness of the attack. In addition, we investigate the critical impact of local optima on the perturbation-direction optimization process and propose a simple and effective approach to detect and escape such local optima. Experimental results on the MNIST, FASHION-MNIST, CIFAR10, CIFAR100, and ImageNet datasets demonstrate the strong performance and scalability of our approach. Compared to state-of-the-art non-targeted and targeted attack methods, TtBA consistently delivers superior performance across most of the evaluated datasets and deep learning models. Code is available at https://anonymous.4open.science/r/TtBA-6ECF.
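
To make the bridge construction concrete, the following is a minimal Python sketch of the two steps the abstract describes: forming a bridge direction as a weighted combination of the current direction and its normal, and binary-searching for $k_\text{bridge}$ before stepping to $\tfrac{2}{3} k_\text{bridge}$. The function names (bridge_direction, find_k_bridge), the exact combination formula, the boundary_distance oracle, and the monotone-bracket assumption inside the bisection are illustrative assumptions, not the paper's exact procedure; see the released code for the authors' implementation.

import numpy as np

def bridge_direction(d, n, k):
    # Weighted combination of the current unit direction d and its unit
    # normal n, controlled by k; the convex-combination form used here is
    # an illustrative assumption.
    v = (1.0 - k) * d + k * n
    return v / np.linalg.norm(v)

def find_k_bridge(d, n, boundary_distance, iters=20):
    # Bisection for k_bridge in (0, 1]: the weight at which the boundary
    # distance along the bridge direction matches the distance along d.
    # boundary_distance is a hypothetical hard-label oracle (e.g. a line
    # search over perturbation magnitude against the top-1 label).
    # Assumption for illustration: the distance gap changes sign exactly
    # once on the bracket below.
    target = boundary_distance(d)
    lo, hi = 1e-3, 1.0  # k = 0 reproduces d itself, so start just above 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if boundary_distance(bridge_direction(d, n, mid)) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Per the abstract, the next perturbation direction would then be taken at
# two thirds of the bridge weight:
#   k_bridge = find_k_bridge(d, n, boundary_distance)
#   d_next = bridge_direction(d, n, (2.0 / 3.0) * k_bridge)
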