Anticorrelated Noise Injection for Improved Generalization

Antonio Orvieto, Hans Kersting, Frank Proske, Francis Bach, Aurelien Lucchi
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:17094-17116, 2022.

Abstract

Injecting artificial noise into gradient descent (GD) is commonly employed to improve the performance of machine learning models. Usually, uncorrelated noise is used in such perturbed gradient descent (PGD) methods. It is, however, not known if this is optimal or whether other types of noise could provide better generalization performance. In this paper, we zoom in on the problem of correlating the perturbations of consecutive PGD steps. We consider a variety of objective functions for which we find that GD with anticorrelated perturbations ("Anti-PGD") generalizes significantly better than GD and standard (uncorrelated) PGD. To support these experimental findings, we also derive a theoretical analysis that demonstrates that Anti-PGD moves to wider minima, while GD and PGD remain stuck in suboptimal regions or even diverge. This new connection between anticorrelated noise and generalization opens the field to novel ways to exploit noise for training machine learning models.
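The update rule behind Anti-PGD can be summarized in a few lines. Below is a minimal sketch, assuming the anticorrelated noise is realized as the difference of consecutive i.i.d. Gaussian samples (so the perturbations injected in successive steps are negatively correlated); the quadratic toy loss and all hyperparameter values are illustrative and not taken from the paper.

    import numpy as np

    # Sketch of perturbed gradient descent with anticorrelated noise ("Anti-PGD"),
    # under the assumption that the perturbation at step k is xi_{k+1} - xi_k for
    # i.i.d. Gaussian samples xi_k. Toy objective and hyperparameters are illustrative.

    def grad(x):
        # Gradient of a simple quadratic toy loss L(x) = 0.5 * ||x||^2.
        return x

    def anti_pgd(x0, lr=0.1, sigma=0.01, steps=1000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        xi_prev = sigma * rng.standard_normal(x.shape)
        for _ in range(steps):
            xi = sigma * rng.standard_normal(x.shape)
            # Gradient step plus the anticorrelated perturbation (xi - xi_prev);
            # setting the perturbation to xi alone would recover standard PGD.
            x = x - lr * grad(x) + (xi - xi_prev)
            xi_prev = xi
        return x

    print(anti_pgd(np.ones(10)))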

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-orvieto22a,
  title     = {Anticorrelated Noise Injection for Improved Generalization},
  author    = {Orvieto, Antonio and Kersting, Hans and Proske, Frank and Bach, Francis and Lucchi, Aurelien},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {17094--17116},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/orvieto22a/orvieto22a.pdf},
  url       = {https://proceedings.mlr.press/v162/orvieto22a.html},
  abstract  = {Injecting artificial noise into gradient descent (GD) is commonly employed to improve the performance of machine learning models. Usually, uncorrelated noise is used in such perturbed gradient descent (PGD) methods. It is, however, not known if this is optimal or whether other types of noise could provide better generalization performance. In this paper, we zoom in on the problem of correlating the perturbations of consecutive PGD steps. We consider a variety of objective functions for which we find that GD with anticorrelated perturbations ("Anti-PGD") generalizes significantly better than GD and standard (uncorrelated) PGD. To support these experimental findings, we also derive a theoretical analysis that demonstrates that Anti-PGD moves to wider minima, while GD and PGD remain stuck in suboptimal regions or even diverge. This new connection between anticorrelated noise and generalization opens the field to novel ways to exploit noise for training machine learning models.}
}
Endnote
%0 Conference Paper
%T Anticorrelated Noise Injection for Improved Generalization
%A Antonio Orvieto
%A Hans Kersting
%A Frank Proske
%A Francis Bach
%A Aurelien Lucchi
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-orvieto22a
%I PMLR
%P 17094--17116
%U https://proceedings.mlr.press/v162/orvieto22a.html
%V 162
%X Injecting artificial noise into gradient descent (GD) is commonly employed to improve the performance of machine learning models. Usually, uncorrelated noise is used in such perturbed gradient descent (PGD) methods. It is, however, not known if this is optimal or whether other types of noise could provide better generalization performance. In this paper, we zoom in on the problem of correlating the perturbations of consecutive PGD steps. We consider a variety of objective functions for which we find that GD with anticorrelated perturbations ("Anti-PGD") generalizes significantly better than GD and standard (uncorrelated) PGD. To support these experimental findings, we also derive a theoretical analysis that demonstrates that Anti-PGD moves to wider minima, while GD and PGD remain stuck in suboptimal regions or even diverge. This new connection between anticorrelated noise and generalization opens the field to novel ways to exploit noise for training machine learning models.
APA
Orvieto, A., Kersting, H., Proske, F., Bach, F. & Lucchi, A. (2022). Anticorrelated Noise Injection for Improved Generalization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:17094-17116. Available from https://proceedings.mlr.press/v162/orvieto22a.html.