On the Inherent Regularization Effects of Noise Injection During Training

Oussama Dhifallah, Yue Lu
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2665-2675, 2021.

Abstract

Randomly perturbing networks during the training process is a commonly used approach to improving generalization performance. In this paper, we present a theoretical study of one particular form of random perturbation: injecting artificial noise into the training data. We provide a precise asymptotic characterization of the training and generalization errors of such randomly perturbed learning problems on a random feature model. Our analysis shows that Gaussian noise injection in the training process is equivalent to introducing a weighted ridge regularization as the number of noise injections tends to infinity. The explicit form of the regularization is also given. Numerical results corroborate our asymptotic predictions, showing that they are accurate even in moderate problem dimensions. Our theoretical predictions are based on a new correlated Gaussian equivalence conjecture that generalizes recent results in the study of random feature models.
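To make the setup concrete, the sketch below (not the authors' code) illustrates the comparison the abstract describes: training a random feature model on inputs perturbed by Gaussian noise versus ridge-regularized training on clean inputs. The ReLU activation, all dimensions, the linear teacher, and the plain (unweighted) ridge strength `lam` are illustrative assumptions; the paper derives the exact weighted ridge regularization induced by noise injection as the number of injections grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper).
d, p, n = 50, 200, 300      # input dim, random features, training samples
sigma, k = 0.3, 100         # injected-noise std, noise injections per sample

# Random feature map phi(x) = relu(F x) with a fixed random matrix F.
F = rng.standard_normal((p, d)) / np.sqrt(d)
relu = lambda z: np.maximum(z, 0.0)

# Synthetic data from a simple linear teacher (purely illustrative).
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)

# (1) Noise-injection training: replicate each sample k times and add
#     independent Gaussian noise to the inputs before the feature map.
X_noisy = np.repeat(X, k, axis=0) + sigma * rng.standard_normal((n * k, d))
Phi_noisy = relu(X_noisy @ F.T)
w_noise, *_ = np.linalg.lstsq(Phi_noisy, np.repeat(y, k), rcond=None)

# (2) Plain ridge regression on the clean features, standing in for the
#     weighted ridge penalty derived in the paper (the exact weighting,
#     which depends on the activation and noise level, is not reproduced here).
Phi = relu(X @ F.T)
lam = 1.0                   # hypothetical regularization strength
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

# Compare generalization error of the two estimators on fresh data.
X_test = rng.standard_normal((2000, d))
y_test = X_test @ w_star
Phi_test = relu(X_test @ F.T)
for name, w in [("noise injection", w_noise), ("ridge", w_ridge)]:
    err = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"{name:15s} test MSE: {err:.4f}")
```

As the number of injections per sample grows, the noise-injected estimator should behave like a suitably weighted ridge estimator; the paper characterizes this equivalence asymptotically and gives the explicit form of the weighting.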

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-dhifallah21a,
  title     = {On the Inherent Regularization Effects of Noise Injection During Training},
  author    = {Dhifallah, Oussama and Lu, Yue},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2665--2675},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/dhifallah21a/dhifallah21a.pdf},
  url       = {https://proceedings.mlr.press/v139/dhifallah21a.html}
}
Endnote
%0 Conference Paper
%T On the Inherent Regularization Effects of Noise Injection During Training
%A Oussama Dhifallah
%A Yue Lu
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-dhifallah21a
%I PMLR
%P 2665--2675
%U https://proceedings.mlr.press/v139/dhifallah21a.html
%V 139
APA
Dhifallah, O. & Lu, Y. (2021). On the Inherent Regularization Effects of Noise Injection During Training. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2665-2675. Available from https://proceedings.mlr.press/v139/dhifallah21a.html.