Understanding Synthetic Gradients and Decoupled Neural Interfaces

Wojciech Marian Czarnecki, Grzegorz Świrszcz, Max Jaderberg, Simon Osindero, Oriol Vinyals, Koray Kavukcuoglu
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:904-912, 2017.

Abstract

When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking – without waiting for a true error gradient to be backpropagated – resulting in Decoupled Neural Interfaces (DNIs). This unlocked ability of being able to update parts of a neural network asynchronously and with only local information was demonstrated to work empirically in Jaderberg et al. (2016). However, there has been very little demonstration of what changes DNIs and SGs impose from a functional, representational, and learning dynamics point of view. In this paper, we study DNIs through the use of synthetic gradients on feed-forward networks to better understand their behaviour and elucidate their effect on optimisation. We show that the incorporation of SGs does not affect the representational strength of the learning system for a neural network, and prove the convergence of the learning system for linear and deep linear models. On practical problems we investigate the mechanism by which synthetic gradient estimators approximate the true loss, and, surprisingly, how that leads to drastically different layer-wise representations. Finally, we also expose the relationship of using synthetic gradients to other error approximation techniques and find a unifying language for discussion and comparison.
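
To make the decoupling mechanism concrete, below is a minimal sketch of training with a synthetic gradient on a toy regression problem. This is not code from the paper: the task, module sizes, learning rates, and names (f1, f2, sg) are illustrative assumptions. The lower module f1 is updated immediately using a gradient predicted by sg from its own activation (no update locking), while sg itself is regressed towards the true gradient once the upper module f2 has computed the loss.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: predict the sum of the inputs.
x = torch.randn(256, 10)
y = x.sum(dim=1, keepdim=True)

f1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # lower module
f2 = nn.Linear(32, 1)                             # upper module
sg = nn.Linear(32, 32)                            # synthetic-gradient estimator for h1

opt1 = torch.optim.SGD(f1.parameters(), lr=1e-2)
opt2 = torch.optim.SGD(f2.parameters(), lr=1e-2)
opt_sg = torch.optim.SGD(sg.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(300):
    # Forward through the lower module.
    h1 = f1(x)

    # Update f1 straight away with the *synthetic* gradient of the loss
    # w.r.t. h1 -- no waiting for the true backpropagated gradient.
    synthetic = sg(h1.detach()).detach()
    opt1.zero_grad()
    h1.backward(synthetic)
    opt1.step()

    # The upper module receives only the activation value across the interface.
    h1_in = h1.detach().requires_grad_(True)
    loss = mse(f2(h1_in), y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

    # Once the true gradient dL/dh1 is available, regress the SG module onto it.
    opt_sg.zero_grad()
    sg_loss = mse(sg(h1.detach()), h1_in.grad.detach())
    sg_loss.backward()
    opt_sg.step()

    if step % 100 == 0:
        print(f"step {step}: task loss {loss.item():.4f}, SG loss {sg_loss.item():.4f}")

In this sketch the synthetic-gradient estimator is a plain linear map of the activation, fitted by simple L2 regression onto the true gradient when it arrives; richer SG modules (e.g. conditioned on additional information) are possible, but the decoupled update pattern is the same.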

Cite this Paper

BibTeX
@InProceedings{pmlr-v70-czarnecki17a,
  title     = {Understanding Synthetic Gradients and Decoupled Neural Interfaces},
  author    = {Wojciech Marian Czarnecki and Grzegorz {\'{S}}wirszcz and Max Jaderberg and Simon Osindero and Oriol Vinyals and Koray Kavukcuoglu},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {904--912},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/czarnecki17a/czarnecki17a.pdf},
  url       = {https://proceedings.mlr.press/v70/czarnecki17a.html},
  abstract  = {When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking – without waiting for a true error gradient to be backpropagated – resulting in Decoupled Neural Interfaces (DNIs). This unlocked ability of being able to update parts of a neural network asynchronously and with only local information was demonstrated to work empirically in Jaderberg et al (2016). However, there has been very little demonstration of what changes DNIs and SGs impose from a functional, representational, and learning dynamics point of view. In this paper, we study DNIs through the use of synthetic gradients on feed-forward networks to better understand their behaviour and elucidate their effect on optimisation. We show that the incorporation of SGs does not affect the representational strength of the learning system for a neural network, and prove the convergence of the learning system for linear and deep linear models. On practical problems we investigate the mechanism by which synthetic gradient estimators approximate the true loss, and, surprisingly, how that leads to drastically different layer-wise representations. Finally, we also expose the relationship of using synthetic gradients to other error approximation techniques and find a unifying language for discussion and comparison.}
}
Endnote
%0 Conference Paper
%T Understanding Synthetic Gradients and Decoupled Neural Interfaces
%A Wojciech Marian Czarnecki
%A Grzegorz Świrszcz
%A Max Jaderberg
%A Simon Osindero
%A Oriol Vinyals
%A Koray Kavukcuoglu
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-czarnecki17a
%I PMLR
%P 904--912
%U https://proceedings.mlr.press/v70/czarnecki17a.html
%V 70
%X When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking – without waiting for a true error gradient to be backpropagated – resulting in Decoupled Neural Interfaces (DNIs). This unlocked ability of being able to update parts of a neural network asynchronously and with only local information was demonstrated to work empirically in Jaderberg et al (2016). However, there has been very little demonstration of what changes DNIs and SGs impose from a functional, representational, and learning dynamics point of view. In this paper, we study DNIs through the use of synthetic gradients on feed-forward networks to better understand their behaviour and elucidate their effect on optimisation. We show that the incorporation of SGs does not affect the representational strength of the learning system for a neural network, and prove the convergence of the learning system for linear and deep linear models. On practical problems we investigate the mechanism by which synthetic gradient estimators approximate the true loss, and, surprisingly, how that leads to drastically different layer-wise representations. Finally, we also expose the relationship of using synthetic gradients to other error approximation techniques and find a unifying language for discussion and comparison.
APA
Czarnecki, W.M., Świrszcz, G., Jaderberg, M., Osindero, S., Vinyals, O. & Kavukcuoglu, K. (2017). Understanding Synthetic Gradients and Decoupled Neural Interfaces. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:904-912. Available from https://proceedings.mlr.press/v70/czarnecki17a.html.
