Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections

Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris Holmes, Mert Gurbuzbalaban, Umut Simsekli
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1249-1260, 2021.

Abstract

Gaussian noise injections (GNIs) are a family of simple and widely-used regularisation methods for training neural networks, in which one injects additive or multiplicative Gaussian noise into the network activations at every iteration of the optimisation algorithm, typically stochastic gradient descent (SGD). In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. We show that this effect induces asymmetric heavy-tailed noise in the SGD gradient updates. To model these modified dynamics, we first develop a Langevin-like stochastic differential equation that is driven by a general family of asymmetric heavy-tailed noises. Using this model we then formally prove that GNIs induce an ‘implicit bias’ that varies with the heaviness of the tails and the level of asymmetry. Our empirical results confirm that different types of neural networks trained with GNIs are well modelled by the proposed dynamics, and that the implicit effect of these injections induces a bias that degrades network performance.
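
For readers unfamiliar with the mechanism, the injection step the abstract describes is straightforward to sketch in code. Below is a minimal, illustrative example of additive GNIs in PyTorch; the layer sizes, noise scale sigma, and toy data are assumptions made for illustration, not the paper's experimental setup.

    # A minimal sketch of additive Gaussian noise injection (GNI): Gaussian
    # noise is added to the activations of each layer at every SGD iteration.
    # Hyperparameters and data here are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    class GNILayer(nn.Module):
        """Linear map + ReLU, with additive Gaussian noise at train time."""
        def __init__(self, d_in, d_out, sigma=0.1):
            super().__init__()
            self.linear = nn.Linear(d_in, d_out)
            self.sigma = sigma

        def forward(self, x):
            h = torch.relu(self.linear(x))
            if self.training:
                # Additive GNI: perturb activations with N(0, sigma^2) noise.
                h = h + self.sigma * torch.randn_like(h)
            return h

    model = nn.Sequential(GNILayer(10, 32), GNILayer(32, 32), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    x, y = torch.randn(64, 10), torch.randn(64, 1)
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)  # noise is re-sampled at every iteration
        loss.backward()              # gradients now carry the injected noise
        opt.step()

Because fresh noise is drawn at every step, the injected randomness propagates into the gradients themselves, which is the ‘implicit effect’ on SGD dynamics that the paper studies.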
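The ‘Langevin-like stochastic differential equation driven by asymmetric heavy-tailed noise’ mentioned in the abstract can be written schematically as follows; the alpha-stable notation (tail index alpha, skewness beta) is a standard convention assumed here, not quoted from the paper.

    % Schematic Langevin-like SDE driven by an asymmetric alpha-stable
    % Levy process, as a stand-in for the dynamics the abstract describes.
    % Notation (alpha, beta, sigma) is assumed, not quoted from the paper.
    \[
      \mathrm{d}\theta_t \;=\; -\nabla f(\theta_t)\,\mathrm{d}t
      \;+\; \sigma\,\mathrm{d}L_t^{\alpha,\beta},
    \]
    % where $f$ is the training loss, $L_t^{\alpha,\beta}$ is an
    % $\alpha$-stable L\'evy process with tail index $\alpha \in (0,2]$
    % (heavier tails as $\alpha$ decreases) and skewness $\beta \in [-1,1]$
    % (the asymmetry), and $\sigma > 0$ scales the noise. For $\alpha = 2$
    % this reduces to the classical Langevin diffusion with Brownian noise.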

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-camuto21a,
  title     = {Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections},
  author    = {Camuto, Alexander and Wang, Xiaoyu and Zhu, Lingjiong and Holmes, Chris and Gurbuzbalaban, Mert and Simsekli, Umut},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1249--1260},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/camuto21a/camuto21a.pdf},
  url       = {https://proceedings.mlr.press/v139/camuto21a.html},
  abstract  = {Gaussian noise injections (GNIs) are a family of simple and widely-used regularisation methods for training neural networks, where one injects additive or multiplicative Gaussian noise to the network activations at every iteration of the optimisation algorithm, which is typically chosen as stochastic gradient descent (SGD). In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. We show that this effect induces an \emph{asymmetric heavy-tailed noise} on SGD gradient updates. In order to model this modified dynamics, we first develop a Langevin-like stochastic differential equation that is driven by a general family of \emph{asymmetric} heavy-tailed noise. Using this model we then formally prove that GNIs induce an ‘implicit bias’, which varies depending on the heaviness of the tails and the level of asymmetry. Our empirical results confirm that different types of neural networks trained with GNIs are well-modelled by the proposed dynamics and that the implicit effect of these injections induces a bias that degrades the performance of networks.}
}
Endnote
%0 Conference Paper
%T Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections
%A Alexander Camuto
%A Xiaoyu Wang
%A Lingjiong Zhu
%A Chris Holmes
%A Mert Gurbuzbalaban
%A Umut Simsekli
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-camuto21a
%I PMLR
%P 1249--1260
%U https://proceedings.mlr.press/v139/camuto21a.html
%V 139
%X Gaussian noise injections (GNIs) are a family of simple and widely-used regularisation methods for training neural networks, where one injects additive or multiplicative Gaussian noise to the network activations at every iteration of the optimisation algorithm, which is typically chosen as stochastic gradient descent (SGD). In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. We show that this effect induces an \emph{asymmetric heavy-tailed noise} on SGD gradient updates. In order to model this modified dynamics, we first develop a Langevin-like stochastic differential equation that is driven by a general family of \emph{asymmetric} heavy-tailed noise. Using this model we then formally prove that GNIs induce an ‘implicit bias’, which varies depending on the heaviness of the tails and the level of asymmetry. Our empirical results confirm that different types of neural networks trained with GNIs are well-modelled by the proposed dynamics and that the implicit effect of these injections induces a bias that degrades the performance of networks.
APA
Camuto, A., Wang, X., Zhu, L., Holmes, C., Gurbuzbalaban, M. & Simsekli, U. (2021). Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1249-1260. Available from https://proceedings.mlr.press/v139/camuto21a.html.