Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics

Aleksandr Gushchin, Khaled Abud, Georgii Bychkov, Ekaterina Shumitskaya, Anna Chistyakova, Sergey Lavrushkin, Bader Rasheed, Kirill Malyshev, Dmitriy S. Vatolin, Anastasia Antsiferova
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:21444-21470, 2025.

Abstract

Modern neural-network-based Image Quality Assessment (IQA) metrics are vulnerable to adversarial attacks, which can be exploited to manipulate search engine rankings, benchmark results, and content quality assessments, raising concerns about the reliability of IQA metrics in critical applications. This paper presents the first comprehensive study of IQA defense mechanisms in response to adversarial attacks on these metrics to pave the way for safer use of IQA metrics. We systematically evaluated 30 defense strategies, including purification, training-based, and certified methods, and applied 14 adversarial attacks in adaptive and non-adaptive settings to compare these defenses on 9 no-reference IQA metrics. Our proposed benchmark aims to guide the development of IQA defense methods and is open to submissions; the latest results and code are at https://msu-video-group.github.io/adversarial-defenses-for-iqa/.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-gushchin25a,
  title = {Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics},
  author = {Gushchin, Aleksandr and Abud, Khaled and Bychkov, Georgii and Shumitskaya, Ekaterina and Chistyakova, Anna and Lavrushkin, Sergey and Rasheed, Bader and Malyshev, Kirill and Vatolin, Dmitriy S. and Antsiferova, Anastasia},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {21444--21470},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gushchin25a/gushchin25a.pdf},
  url = {https://proceedings.mlr.press/v267/gushchin25a.html},
  abstract = {Modern neural-network-based Image Quality Assessment (IQA) metrics are vulnerable to adversarial attacks, which can be exploited to manipulate search engine rankings, benchmark results, and content quality assessments, raising concerns about the reliability of IQA metrics in critical applications. This paper presents the first comprehensive study of IQA defense mechanisms in response to adversarial attacks on these metrics to pave the way for safer use of IQA metrics. We systematically evaluated 30 defense strategies, including purification, training-based, and certified methods, and applied 14 adversarial attacks in adaptive and non-adaptive settings to compare these defenses on 9 no-reference IQA metrics. Our proposed benchmark aims to guide the development of IQA defense methods and is open to submissions; the latest results and code are at https://msu-video-group.github.io/adversarial-defenses-for-iqa/.}
}
Endnote
%0 Conference Paper
%T Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics
%A Aleksandr Gushchin
%A Khaled Abud
%A Georgii Bychkov
%A Ekaterina Shumitskaya
%A Anna Chistyakova
%A Sergey Lavrushkin
%A Bader Rasheed
%A Kirill Malyshev
%A Dmitriy S. Vatolin
%A Anastasia Antsiferova
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-gushchin25a
%I PMLR
%P 21444--21470
%U https://proceedings.mlr.press/v267/gushchin25a.html
%V 267
%X Modern neural-network-based Image Quality Assessment (IQA) metrics are vulnerable to adversarial attacks, which can be exploited to manipulate search engine rankings, benchmark results, and content quality assessments, raising concerns about the reliability of IQA metrics in critical applications. This paper presents the first comprehensive study of IQA defense mechanisms in response to adversarial attacks on these metrics to pave the way for safer use of IQA metrics. We systematically evaluated 30 defense strategies, including purification, training-based, and certified methods, and applied 14 adversarial attacks in adaptive and non-adaptive settings to compare these defenses on 9 no-reference IQA metrics. Our proposed benchmark aims to guide the development of IQA defense methods and is open to submissions; the latest results and code are at https://msu-video-group.github.io/adversarial-defenses-for-iqa/.
APA
Gushchin, A., Abud, K., Bychkov, G., Shumitskaya, E., Chistyakova, A., Lavrushkin, S., Rasheed, B., Malyshev, K., Vatolin, D. S., & Antsiferova, A. (2025). Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:21444-21470. Available from https://proceedings.mlr.press/v267/gushchin25a.html.