The Shattered Gradients Problem: If resnets are the answer, then what is the question?

David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, Brian McWilliams
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:342-350, 2017.

Abstract

A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway networks and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth, resulting in gradients that resemble white noise; in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing that the new initialization allows very deep networks to be trained without the addition of skip-connections.
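
As a rough illustration of the “looks linear” idea summarized above, the numpy sketch below pairs a concatenated ReLU (CReLU) with mirrored weight blocks so that a layer computes an exactly linear map at initialization. The function names (crelu, ll_init) and the Gaussian fan-in scaling are illustrative assumptions for this sketch, not code from the paper.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def crelu(x):
    # Concatenated ReLU: stack relu(x) and relu(-x).
    return np.concatenate([relu(x), relu(-x)], axis=0)

def ll_init(fan_out, fan_in, rng):
    # "Looks linear" initialization (sketch): draw a base matrix W0 and mirror it,
    # so that [W0, -W0] @ crelu(z) == W0 @ z for any vector z.
    w0 = rng.standard_normal((fan_out, fan_in)) / np.sqrt(fan_in)
    return np.concatenate([w0, -w0], axis=1)  # shape: (fan_out, 2 * fan_in)

rng = np.random.default_rng(0)
d = 16
x = rng.standard_normal(d)

w = rng.standard_normal((d, d)) / np.sqrt(d)  # first-layer weights (hypothetical scaling)
v = ll_init(d, d, rng)                        # second layer, LL-initialized

out = v @ crelu(w @ x)
linear = (v[:, :d] @ w) @ x
print(np.allclose(out, linear))  # True: the two-layer block is exactly linear at init

Because the mirrored halves cancel, the block behaves like a linear network at initialization, so its gradients do not shatter; training is then free to move the two halves apart.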

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-balduzzi17b,
  title     = {The Shattered Gradients Problem: If resnets are the answer, then what is the question?},
  author    = {David Balduzzi and Marcus Frean and Lennox Leary and J. P. Lewis and Kurt Wan-Duo Ma and Brian McWilliams},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {342--350},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/balduzzi17b/balduzzi17b.pdf},
  url       = {https://proceedings.mlr.press/v70/balduzzi17b.html},
  abstract  = {A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although, the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows to train very deep networks without the addition of skip-connections.}
}
Endnote
%0 Conference Paper
%T The Shattered Gradients Problem: If resnets are the answer, then what is the question?
%A David Balduzzi
%A Marcus Frean
%A Lennox Leary
%A J. P. Lewis
%A Kurt Wan-Duo Ma
%A Brian McWilliams
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-balduzzi17b
%I PMLR
%P 342--350
%U https://proceedings.mlr.press/v70/balduzzi17b.html
%V 70
%X A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although, the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows to train very deep networks without the addition of skip-connections.
APA
Balduzzi, D., Frean, M., Leary, L., Lewis, J.P., Ma, K.W. & McWilliams, B. (2017). The Shattered Gradients Problem: If resnets are the answer, then what is the question?. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:342-350. Available from https://proceedings.mlr.press/v70/balduzzi17b.html.
