Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3883-3893, 2020.

Abstract

Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under L2 perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models.

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-zhang20h,
  title     = {Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models},
  author    = {Zhang, Xiao and Chen, Jinghui and Gu, Quanquan and Evans, David},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3883--3893},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/zhang20h/zhang20h.pdf},
  url       = {https://proceedings.mlr.press/v108/zhang20h.html},
  abstract  = {Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under L2 perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models.}
}
Endnote
%0 Conference Paper
%T Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models
%A Xiao Zhang
%A Jinghui Chen
%A Quanquan Gu
%A David Evans
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-zhang20h
%I PMLR
%P 3883--3893
%U https://proceedings.mlr.press/v108/zhang20h.html
%V 108
%X Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under L2 perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models.
APA
Zhang, X., Chen, J., Gu, Q., & Evans, D. (2020). Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3883-3893. Available from https://proceedings.mlr.press/v108/zhang20h.html.