On the Feasibility of Fréchet Radiomic Distance–Constrained Adversarial Examples in Medical Imaging: Methods and Trade-offs

Mohamed Mahmoud, Shehab Khaled, Mohamed Elkhayat, Jamil Fayyad
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2514-2528, 2026.

Abstract

Adversarial attacks expose critical vulnerabilities in medical imaging AI models; yet, most existing methods violate the textural and structural characteristics that define authentic medical images by disregarding the clinical and radiomic plausibility of the generated perturbations. In this study, we present the first systematic investigation into the existence and feasibility of adversarial examples constrained by the Fréchet Radiomic Distance (FRD), a quantitative measure of radiomic similarity capturing textural, structural, and statistical coherence between images. We formulate a gradient-free, multi-objective optimization framework based on Multi-Objective Particle Swarm Optimization (MOPSO) operating in the Discrete Cosine Transform (DCT) domain. This framework jointly minimizes FRD and maximizes adversarial deviation, allowing a principled exploration of the trade-off between radiomic fidelity and adversarial strength without requiring gradient access. Empirical evidence across multiple medical imaging models demonstrates that enforcing strong FRD constraints (FRD $\leq$ 0.05) dramatically reduces adversarial feasibility. Perturbations preserving radiomic fidelity consistently fail to achieve meaningful adversarial deviation, suggesting that radiomic realism imposes an intrinsic feasibility boundary on adversarial generation. These findings establish radiomic consistency as a fundamental constraint on adversarial vulnerability, offering theoretical and empirical insight toward the development of inherently robust and trustworthy medical imaging AI.
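The framework described in the abstract — a gradient-free multi-objective particle swarm searching DCT-domain perturbations that trade off radiomic fidelity against adversarial deviation — can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation: `toy_model` stands in for a black-box classifier, `frd_surrogate` is a simple Fréchet-style moment distance standing in for the true radiomic FRD, and all hyperparameters are hypothetical.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical black-box score; queried without gradient access.
    return float(np.tanh(10.0 * x.mean()))

def frd_surrogate(x, x_adv):
    # Stand-in for FRD: squared distance between intensity moments.
    return float((x.mean() - x_adv.mean()) ** 2 + (x.std() - x_adv.std()) ** 2)

def dominates(f, g):
    # Pareto dominance for minimization of both objectives.
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def mopso_dct(x, n_particles=20, n_iters=30, eps=0.05):
    coeffs = dctn(x, norm="ortho")
    base = toy_model(x)

    def evaluate(p):
        # Objectives: (radiomic distance, negated adversarial deviation).
        x_adv = idctn(coeffs + p, norm="ortho")
        return (frd_surrogate(x, x_adv), -abs(toy_model(x_adv) - base))

    # Particles are additive perturbations of the DCT coefficients.
    pos = rng.normal(scale=1e-3, size=(n_particles,) + coeffs.shape)
    vel = np.zeros_like(pos)
    pbest = [(evaluate(pos[i]), pos[i].copy()) for i in range(n_particles)]
    archive = []  # non-dominated (objectives, perturbation) pairs

    for _ in range(n_iters):
        leaders = archive if archive else pbest
        for i in range(n_particles):
            lead = leaders[rng.integers(len(leaders))][1]
            r1, r2 = rng.random(2)
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (pbest[i][1] - pos[i])
                      + 1.5 * r2 * (lead - pos[i]))
            pos[i] += vel[i]
            f = evaluate(pos[i])
            if dominates(f, pbest[i][0]):
                pbest[i] = (f, pos[i].copy())
            if not any(dominates(a[0], f) for a in archive):
                archive = [a for a in archive if not dominates(f, a[0])]
                archive.append((f, pos[i].copy()))

    # Keep only Pareto solutions meeting the radiomic budget (cf. FRD <= 0.05).
    return [(f, p) for f, p in archive if f[0] <= eps]
```

The paper's finding can be read directly off the returned front: under a tight budget `eps`, the surviving solutions tend to have negligible deviation, i.e. radiomic fidelity and adversarial strength are rarely achievable together.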

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-mahmoud26a,
  title     = {On the Feasibility of Fr{é}chet Radiomic Distance–Constrained Adversarial Examples in Medical Imaging: Methods and Trade-offs},
  author    = {Mahmoud, Mohamed and Khaled, Shehab and Elkhayat, Mohamed and Fayyad, Jamil},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {2514--2528},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/mahmoud26a/mahmoud26a.pdf},
  url       = {https://proceedings.mlr.press/v315/mahmoud26a.html},
  abstract  = {Adversarial attacks expose critical vulnerabilities in medical imaging AI models; yet, most existing methods violate the textural and structural characteristics that define authentic medical images by disregarding the clinical and radiomic plausibility of the generated perturbations. In this study, we present the first systematic investigation into the existence and feasibility of adversarial examples constrained by the Fr{é}chet Radiomic Distance (FRD), a quantitative measure of radiomic similarity capturing textural, structural, and statistical coherence between images. We formulate a gradient-free, multi-objective optimization framework based on Multi-Objective Particle Swarm Optimization (MOPSO) operating in the Discrete Cosine Transform (DCT) domain. This framework jointly minimizes FRD and maximizes adversarial deviation, allowing a principled exploration of the trade-off between radiomic fidelity and adversarial strength without requiring gradient access. Empirical evidence across multiple medical imaging models demonstrates that enforcing strong FRD constraints (FRD $\leq$ 0.05) dramatically reduces adversarial feasibility. Perturbations preserving radiomic fidelity consistently fail to achieve meaningful adversarial deviation, suggesting that radiomic realism imposes an intrinsic feasibility boundary on adversarial generation. These findings establish radiomic consistency as a fundamental constraint on adversarial vulnerability, offering theoretical and empirical insight toward the development of inherently robust and trustworthy medical imaging AI.}
}
Endnote
%0 Conference Paper %T On the Feasibility of Fréchet Radiomic Distance–Constrained Adversarial Examples in Medical Imaging: Methods and Trade-offs %A Mohamed Mahmoud %A Shehab Khaled %A Mohamed Elkhayat %A Jamil Fayyad %B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning %C Proceedings of Machine Learning Research %D 2026 %E Yuankai Huo %E Mingchen Gao %E Chang-Fu Kuo %E Yueming Jin %E Ruining Deng %F pmlr-v315-mahmoud26a %I PMLR %P 2514--2528 %U https://proceedings.mlr.press/v315/mahmoud26a.html %V 315 %X Adversarial attacks expose critical vulnerabilities in medical imaging AI models; yet, most existing methods violate the textural and structural characteristics that define authentic medical images by disregarding the clinical and radiomic plausibility of the generated perturbations. In this study, we present the first systematic investigation into the existence and feasibility of adversarial examples constrained by the Fréchet Radiomic Distance (FRD), a quantitative measure of radiomic similarity capturing textural, structural, and statistical coherence between images. We formulate a gradient-free, multi-objective optimization framework based on Multi-Objective Particle Swarm Optimization (MOPSO) operating in the Discrete Cosine Transform (DCT) domain. This framework jointly minimizes FRD and maximizes adversarial deviation, allowing a principled exploration of the trade-off between radiomic fidelity and adversarial strength without requiring gradient access. Empirical evidence across multiple medical imaging models demonstrates that enforcing strong FRD constraints (FRD $\leq$ 0.05) dramatically reduces adversarial feasibility. Perturbations preserving radiomic fidelity consistently fail to achieve meaningful adversarial deviation, suggesting that radiomic realism imposes an intrinsic feasibility boundary on adversarial generation. These findings establish radiomic consistency as a fundamental constraint on adversarial vulnerability, offering theoretical and empirical insight toward the development of inherently robust and trustworthy medical imaging AI.
APA
Mahmoud, M., Khaled, S., Elkhayat, M. & Fayyad, J. (2026). On the Feasibility of Fréchet Radiomic Distance–Constrained Adversarial Examples in Medical Imaging: Methods and Trade-offs. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:2514-2528. Available from https://proceedings.mlr.press/v315/mahmoud26a.html.