Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition

Shuai Yuan, Hongwei Li, Rui Zhang, Hangcheng Cao, Wenbo Jiang, Tao Ni, Wenshu Fan, Qingchuan Zhao, Guowen Xu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:73541-73552, 2025.

Abstract

Deep learning models employed in face recognition (FR) systems have been shown to be vulnerable to physical adversarial attacks through various modalities, including patches, projections, and infrared radiation. However, existing adversarial examples targeting FR systems often suffer from issues such as conspicuousness, limited effectiveness, and insufficient robustness. To address these challenges, we propose a novel approach for adversarial face generation, UVHat, which utilizes ultraviolet (UV) emitters mounted on a hat to enable invisible and potent attacks in black-box settings. Specifically, UVHat simulates UV light sources via video interpolation and models the positions of these light sources on a curved surface, specifically the human head in our study. To optimize attack performance, UVHat integrates a reinforcement learning-based optimization strategy, which explores a vast parameter search space, encompassing factors such as shooting distance, power, and wavelength. Extensive experimental evaluations validate that UVHat substantially improves the attack success rate in black-box settings, enabling adversarial attacks from multiple angles with enhanced robustness.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-yuan25e,
  title     = {Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition},
  author    = {Yuan, Shuai and Li, Hongwei and Zhang, Rui and Cao, Hangcheng and Jiang, Wenbo and Ni, Tao and Fan, Wenshu and Zhao, Qingchuan and Xu, Guowen},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {73541--73552},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/yuan25e/yuan25e.pdf},
  url       = {https://proceedings.mlr.press/v267/yuan25e.html},
  abstract  = {Deep learning models employed in face recognition (FR) systems have been shown to be vulnerable to physical adversarial attacks through various modalities, including patches, projections, and infrared radiation. However, existing adversarial examples targeting FR systems often suffer from issues such as conspicuousness, limited effectiveness, and insufficient robustness. To address these challenges, we propose a novel approach for adversarial face generation, UVHat, which utilizes ultraviolet (UV) emitters mounted on a hat to enable invisible and potent attacks in black-box settings. Specifically, UVHat simulates UV light sources via video interpolation and models the positions of these light sources on a curved surface, specifically the human head in our study. To optimize attack performance, UVHat integrates a reinforcement learning-based optimization strategy, which explores a vast parameter search space, encompassing factors such as shooting distance, power, and wavelength. Extensive experimental evaluations validate that UVHat substantially improves the attack success rate in black-box settings, enabling adversarial attacks from multiple angles with enhanced robustness.}
}
Endnote
%0 Conference Paper
%T Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition
%A Shuai Yuan
%A Hongwei Li
%A Rui Zhang
%A Hangcheng Cao
%A Wenbo Jiang
%A Tao Ni
%A Wenshu Fan
%A Qingchuan Zhao
%A Guowen Xu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-yuan25e
%I PMLR
%P 73541--73552
%U https://proceedings.mlr.press/v267/yuan25e.html
%V 267
%X Deep learning models employed in face recognition (FR) systems have been shown to be vulnerable to physical adversarial attacks through various modalities, including patches, projections, and infrared radiation. However, existing adversarial examples targeting FR systems often suffer from issues such as conspicuousness, limited effectiveness, and insufficient robustness. To address these challenges, we propose a novel approach for adversarial face generation, UVHat, which utilizes ultraviolet (UV) emitters mounted on a hat to enable invisible and potent attacks in black-box settings. Specifically, UVHat simulates UV light sources via video interpolation and models the positions of these light sources on a curved surface, specifically the human head in our study. To optimize attack performance, UVHat integrates a reinforcement learning-based optimization strategy, which explores a vast parameter search space, encompassing factors such as shooting distance, power, and wavelength. Extensive experimental evaluations validate that UVHat substantially improves the attack success rate in black-box settings, enabling adversarial attacks from multiple angles with enhanced robustness.
APA
Yuan, S., Li, H., Zhang, R., Cao, H., Jiang, W., Ni, T., Fan, W., Zhao, Q. & Xu, G. (2025). Omni-Angle Assault: An Invisible and Powerful Physical Adversarial Attack on Face Recognition. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:73541-73552. Available from https://proceedings.mlr.press/v267/yuan25e.html.