On the Feasibility of Fréchet Radiomic Distance–Constrained Adversarial Examples in Medical Imaging: Methods and Trade-offs
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2514-2528, 2026.
Abstract
Adversarial attacks expose critical vulnerabilities in medical imaging AI models; yet most existing methods violate the textural and structural characteristics that define authentic medical images by disregarding the clinical and radiomic plausibility of the generated perturbations. In this study, we present the first systematic investigation into the existence and feasibility of adversarial examples constrained by the Fréchet Radiomic Distance (FRD), a quantitative measure of radiomic similarity capturing textural, structural, and statistical coherence between images. We formulate a gradient-free, multi-objective optimization framework based on Multi-Objective Particle Swarm Optimization (MOPSO) operating in the Discrete Cosine Transform (DCT) domain. This framework jointly minimizes FRD and maximizes adversarial deviation, allowing a principled exploration of the trade-off between radiomic fidelity and adversarial strength without requiring gradient access. Empirical evidence across multiple medical imaging models demonstrates that enforcing strong FRD constraints (FRD $\leq$ 0.05) dramatically reduces adversarial feasibility. Perturbations preserving radiomic fidelity consistently fail to achieve meaningful adversarial deviation, suggesting that radiomic realism imposes an intrinsic feasibility boundary on adversarial generation. These findings establish radiomic consistency as a fundamental constraint on adversarial vulnerability, offering theoretical and empirical insight toward the development of inherently robust and trustworthy medical imaging AI.
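The abstract's core idea, gradient-free particle-swarm search over DCT coefficients that trades adversarial deviation against radiomic fidelity, can be illustrated with a minimal sketch. This is not the paper's implementation: the radiomic features, the FRD itself, and the black-box model are replaced here by simple placeholders, and the true Pareto-based MOPSO is simplified to a single-swarm weighted-sum scalarization of the two objectives. All function names (`frd_proxy`, `model_score`, `pso_dct_attack`) and hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn  # DCT/inverse DCT over the image grid

def radiomic_proxy(img):
    # Crude texture statistics (mean, std, gradient energy) standing in
    # for real radiomic features; the paper uses FRD over radiomic vectors.
    g0, g1 = np.gradient(img)
    return np.array([img.mean(), img.std(), np.mean(g0**2 + g1**2)])

def frd_proxy(a, b):
    # Placeholder for FRD: distance between proxy feature vectors.
    return np.linalg.norm(radiomic_proxy(a) - radiomic_proxy(b))

def model_score(img):
    # Toy black-box model responding to mean intensity (placeholder).
    return 1.0 / (1.0 + np.exp(-10 * (img.mean() - 0.5)))

def pso_dct_attack(x, n_particles=20, n_iters=50, k=8, lam=5.0, seed=0):
    """Each particle holds the k x k low-frequency DCT coefficients of an
    additive perturbation. Fitness = adversarial deviation - lam * FRD proxy
    (weighted-sum scalarization, a simplification of Pareto-based MOPSO)."""
    rng = np.random.default_rng(seed)
    base = model_score(x)

    def fitness(coeffs):
        delta = np.zeros_like(x)
        delta[:k, :k] = coeffs                      # low-frequency support
        x_adv = np.clip(x + idctn(delta, norm="ortho"), 0.0, 1.0)
        return abs(model_score(x_adv) - base) - lam * frd_proxy(x, x_adv)

    pos = rng.normal(0.0, 0.01, (n_particles, k, k))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(c) for c in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(c) for c in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    delta = np.zeros_like(x)
    delta[:k, :k] = gbest
    return np.clip(x + idctn(delta, norm="ortho"), 0.0, 1.0)
```

Restricting particles to low-frequency DCT coefficients keeps perturbations smooth, which is one plausible reading of why the DCT domain helps preserve radiomic texture statistics; raising `lam` tightens the fidelity constraint and, per the paper's finding, should shrink the achievable adversarial deviation.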