NoisyMix: Boosting Model Robustness to Common Corruptions

Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, Michael Mahoney
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4033-4041, 2024.

Abstract

The robustness of neural networks has become increasingly important in real-world applications where stable and reliable performance is valued over simply achieving high predictive accuracy. To address this, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.
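This page does not include code, but as a rough illustration of the idea the abstract describes (mixing training pairs and injecting noise), here is a minimal, hypothetical PyTorch-style sketch of a mix-and-perturb augmentation step. All names and hyperparameters (noisy_mixup, sigma_add, sigma_mult, alpha) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F
from torch.distributions import Beta

def noisy_mixup(x, y, num_classes, alpha=1.0, sigma_add=0.1, sigma_mult=0.1):
    """Hypothetical sketch: mix a batch with a shuffled copy of itself,
    then inject additive and multiplicative Gaussian noise.
    (Illustrative only; not the authors' code.)"""
    # Sample a mixing coefficient from a Beta distribution, as in mixup.
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    # Convex combination of each example with a randomly paired partner.
    x_mix = lam * x + (1.0 - lam) * x[perm]
    # Noise injection: multiplicative and additive Gaussian perturbations.
    x_mix = x_mix * (1.0 + sigma_mult * torch.randn_like(x_mix)) \
            + sigma_add * torch.randn_like(x_mix)
    # Mix the one-hot labels with the same coefficient; train against
    # this soft target with a cross-entropy-style loss.
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

Since the abstract emphasizes noisy augmentations in both input and feature space, a full implementation of the scheme would presumably apply the same mix-and-perturb step not only to the raw inputs, as above, but also to intermediate activations at a randomly chosen hidden layer.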

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-erichson24a,
  title     = {{NoisyMix}: Boosting Model Robustness to Common Corruptions},
  author    = {Erichson, Benjamin and Hoe Lim, Soon and Xu, Winnie and Utrera, Francisco and Cao, Ziang and Mahoney, Michael},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4033--4041},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/erichson24a/erichson24a.pdf},
  url       = {https://proceedings.mlr.press/v238/erichson24a.html},
  abstract  = {The robustness of neural networks has become increasingly important in real-world applications where stable and reliable performance is valued over simply achieving high predictive accuracy. To address this, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.}
}
Endnote
%0 Conference Paper
%T NoisyMix: Boosting Model Robustness to Common Corruptions
%A Benjamin Erichson
%A Soon Hoe Lim
%A Winnie Xu
%A Francisco Utrera
%A Ziang Cao
%A Michael Mahoney
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-erichson24a
%I PMLR
%P 4033--4041
%U https://proceedings.mlr.press/v238/erichson24a.html
%V 238
%X The robustness of neural networks has become increasingly important in real-world applications where stable and reliable performance is valued over simply achieving high predictive accuracy. To address this, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.
APA
Erichson, B., Hoe Lim, S., Xu, W., Utrera, F., Cao, Z. & Mahoney, M. (2024). NoisyMix: Boosting Model Robustness to Common Corruptions. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4033-4041. Available from https://proceedings.mlr.press/v238/erichson24a.html.