A Manifold View of Adversarial Risk

Wenjia Zhang, Yikai Zhang, Xiaoling Hu, Mayank Goswami, Chao Chen, Dimitris N. Metaxas
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:11598-11614, 2022.

Abstract

The adversarial risk of a machine learning model has been widely studied. Most previous works assume that the data occupies the whole ambient space. We take a new angle and bring the manifold assumption into the picture. Assuming the data lies on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, arising from perturbations along the normal direction, and the in-manifold adversarial risk, arising from perturbations within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also exhibit a surprisingly pessimistic case in which the standard adversarial risk is nonzero even though both the normal and in-manifold risks are zero. We conclude with empirical studies supporting our theoretical results. Our results suggest that the robustness of a classifier can be improved by focusing only on the normal adversarial risk.
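
To make the three notions concrete, the following is a minimal LaTeX sketch of how the restricted risks relate to the standard one. The notation (classifier f, data distribution D supported on a manifold M embedded in R^d, ambient epsilon-ball B(x, epsilon), normal space N_x M at x) is assumed here for illustration; the paper's exact definitions, including its choice of norm and boundary conventions, may differ.

\begin{align*}
  R_{\mathrm{adv}}(f) &= \Pr_{(x,y)\sim\mathcal{D}}\bigl[\exists\, x' \in B(x,\epsilon) : f(x') \neq y\bigr]
    && \text{(standard)} \\
  R_{\mathrm{in}}(f)  &= \Pr_{(x,y)\sim\mathcal{D}}\bigl[\exists\, x' \in B(x,\epsilon)\cap\mathcal{M} : f(x') \neq y\bigr]
    && \text{(in-manifold)} \\
  R_{\mathrm{nor}}(f) &= \Pr_{(x,y)\sim\mathcal{D}}\bigl[\exists\, x' \in B(x,\epsilon)\cap\bigl(x + N_x\mathcal{M}\bigr) : f(x') \neq y\bigr]
    && \text{(normal)}
\end{align*}

Under this reading, the lower bound is immediate: both restricted perturbation sets are subsets of the ambient ball, so max(R_in, R_nor) <= R_adv always holds. The nontrivial direction is the upper bound, whose exact form is not reproduced here; notably, the pessimistic case above shows that R_adv need not vanish merely because both restricted risks do.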

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-zhang22h,
  title     = {A Manifold View of Adversarial Risk},
  author    = {Zhang, Wenjia and Zhang, Yikai and Hu, Xiaoling and Goswami, Mayank and Chen, Chao and Metaxas, Dimitris N.},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {11598--11614},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/zhang22h/zhang22h.pdf},
  url       = {https://proceedings.mlr.press/v151/zhang22h.html}
}
Endnote
%0 Conference Paper
%T A Manifold View of Adversarial Risk
%A Wenjia Zhang
%A Yikai Zhang
%A Xiaoling Hu
%A Mayank Goswami
%A Chao Chen
%A Dimitris N. Metaxas
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-zhang22h
%I PMLR
%P 11598--11614
%U https://proceedings.mlr.press/v151/zhang22h.html
%V 151
APA
Zhang, W., Zhang, Y., Hu, X., Goswami, M., Chen, C. & Metaxas, D.N. (2022). A Manifold View of Adversarial Risk. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:11598-11614. Available from https://proceedings.mlr.press/v151/zhang22h.html.
