Robust Perception through Equivariance

Chengzhi Mao, Lingyu Zhang, Abhishek Vaibhav Joshi, Junfeng Yang, Hao Wang, Carl Vondrick
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23852-23870, 2023.

Abstract

Deep networks for computer vision are not reliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. By introducing constraints at inference time, we can shift the burden of robustness from training to testing, thereby allowing the model to dynamically adjust to each individual image’s unique and potentially novel characteristics at inference time. Our theoretical results show the importance of having dense constraints at inference time. In contrast to existing single-constraint methods, we propose to use equivariance, which naturally allows dense constraints at a fine-grained level in the feature space. Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks.
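The idea in the abstract — impose many equivariance constraints on an individual input at test time and optimize the input until the features satisfy them — can be caricatured in a few lines. Below is a minimal, dependency-free sketch under toy assumptions: `features` is a small hand-built extractor whose position-dependent bias breaks exact shift equivariance, the transformations are circular shifts, and `restore` minimizes the dense equivariance loss with finite-difference gradient descent. All names and hyperparameters here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature extractor": circular convolution + ReLU, plus a
# position-dependent bias that breaks exact shift equivariance.
# (Illustrative stand-in for a deep network.)
w = rng.normal(size=5)
bias = 0.1 * rng.normal(size=16)

def features(x):
    conv = np.array([np.dot(w, np.roll(x, -i)[:5]) for i in range(len(x))])
    return np.maximum(conv + bias, 0.0)

def shift(v, k):
    return np.roll(v, k)

# Dense equivariance loss: sum over several shifts of
# || f(T_k(x)) - T_k(f(x)) ||^2  — one constraint per shift,
# measured densely across the whole feature map.
def equiv_loss(x, shifts=(1, 2, 3)):
    return sum(np.sum((features(shift(x, k)) - shift(features(x), k)) ** 2)
               for k in shifts)

# Inference-time restoration: optimize a corrective perturbation delta
# so that x + delta better satisfies the equivariance constraints.
# Finite differences keep the sketch free of autodiff dependencies.
def restore(x, steps=100, lr=0.01, eps=1e-4):
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            grad[i] = (equiv_loss(x + delta + e)
                       - equiv_loss(x + delta - e)) / (2 * eps)
        delta -= lr * grad
    return x + delta

x_adv = rng.normal(size=16)   # stand-in for an attacked input
x_fix = restore(x_adv)
print(equiv_loss(x_adv), equiv_loss(x_fix))  # restored loss should be smaller
```

In the paper the same shift of burden happens at scale: the constraint is evaluated in the feature space of a trained network over many transformations, and the restored input is then passed to the downstream recognition or segmentation head.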

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mao23d,
  title     = {Robust Perception through Equivariance},
  author    = {Mao, Chengzhi and Zhang, Lingyu and Joshi, Abhishek Vaibhav and Yang, Junfeng and Wang, Hao and Vondrick, Carl},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {23852--23870},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mao23d/mao23d.pdf},
  url       = {https://proceedings.mlr.press/v202/mao23d.html},
  abstract  = {Deep networks for computer vision are not reliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. By introducing constraints at inference time, we can shift the burden of robustness from training to testing, thereby allowing the model to dynamically adjust to each individual image’s unique and potentially novel characteristics at inference time. Our theoretical results show the importance of having dense constraints at inference time. In contrast to existing single-constraint methods, we propose to use equivariance, which naturally allows dense constraints at a fine-grained level in the feature space. Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks.}
}
Endnote
%0 Conference Paper
%T Robust Perception through Equivariance
%A Chengzhi Mao
%A Lingyu Zhang
%A Abhishek Vaibhav Joshi
%A Junfeng Yang
%A Hao Wang
%A Carl Vondrick
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mao23d
%I PMLR
%P 23852--23870
%U https://proceedings.mlr.press/v202/mao23d.html
%V 202
%X Deep networks for computer vision are not reliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. By introducing constraints at inference time, we can shift the burden of robustness from training to testing, thereby allowing the model to dynamically adjust to each individual image’s unique and potentially novel characteristics at inference time. Our theoretical results show the importance of having dense constraints at inference time. In contrast to existing single-constraint methods, we propose to use equivariance, which naturally allows dense constraints at a fine-grained level in the feature space. Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks.
APA
Mao, C., Zhang, L., Joshi, A.V., Yang, J., Wang, H. & Vondrick, C. (2023). Robust Perception through Equivariance. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23852-23870. Available from https://proceedings.mlr.press/v202/mao23d.html.