Decoupled Parallel Backpropagation with Convergence Guarantee

Zhouyuan Huo, Bin Gu, Qian Yang, Heng Huang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2098-2106, 2018.

Abstract

The backpropagation algorithm is indispensable for training feedforward neural networks. It requires propagating error gradients sequentially from the output layer all the way back to the input layer. This backward locking prevents us from updating network layers in parallel and fully leveraging the computing resources. Recently, several algorithms have been proposed to break the backward locking, but their performance degrades seriously when the networks are deep. In this paper, we propose a decoupled parallel backpropagation algorithm for deep learning optimization with a convergence guarantee. First, we decouple the backpropagation algorithm using delayed gradients and show that the backward locking is removed when we split the network into multiple modules. Then, we apply decoupled parallel backpropagation to two stochastic methods and prove that our method guarantees convergence to critical points for non-convex problems. Finally, we perform experiments training deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method achieves significant speedup without loss of accuracy.
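
The sketch below is a minimal, sequential PyTorch illustration of the delayed-gradient idea described in the abstract, not the authors' implementation: the two-module split, the toy network, the one-step delay, and the SGD settings are illustrative assumptions, and the first module's forward pass is recomputed at update time rather than cached and run in parallel across devices as in the paper.

# Illustrative sketch (not the authors' code): training two network modules with
# gradients of different staleness, so the earlier module does not have to wait
# for the later module's backward pass of the current step.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-module split of a small network.
module1 = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
module2 = nn.Sequential(nn.Linear(64, 10))
opt1 = torch.optim.SGD(module1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(module2.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

stale = None  # (input batch, gradient at the module boundary) from one step ago

for step in range(200):
    x = torch.randn(32, 20)                 # random toy data
    y = torch.randint(0, 10, (32,))

    # Forward through module1 without building its graph; the boundary tensor h
    # will receive the gradient produced by module2's backward pass.
    with torch.no_grad():
        h = module1(x)
    h.requires_grad_(True)

    # Module2 is updated immediately with the current gradient; its backward
    # pass stops at the module boundary instead of reaching module1.
    loss = criterion(module2(h), y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

    # Module1 is updated with the gradient received one step ago (delayed),
    # so in a parallel implementation it would not wait for module2.
    if stale is not None:
        x_old, g_old = stale
        opt1.zero_grad()
        module1(x_old).backward(g_old)       # backpropagate the delayed gradient
        opt1.step()

    # Hand the freshly computed boundary gradient to module1 for the next step.
    stale = (x, h.grad.clone())

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")

With more modules, each earlier module would simply use a gradient delayed by correspondingly more steps, which is what allows all modules to run their backward computations concurrently.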

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-huo18a,
  title = {Decoupled Parallel Backpropagation with Convergence Guarantee},
  author = {Huo, Zhouyuan and Gu, Bin and Yang, Qian and Huang, Heng},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {2098--2106},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/huo18a/huo18a.pdf},
  url = {https://proceedings.mlr.press/v80/huo18a.html},
  abstract = {Backpropagation algorithm is indispensable for the training of feedforward neural networks. It requires propagating error gradients sequentially from the output layer all the way back to the input layer. The backward locking in backpropagation algorithm constrains us from updating network layers in parallel and fully leveraging the computing resources. Recently, several algorithms have been proposed for breaking the backward locking. However, their performances degrade seriously when networks are deep. In this paper, we propose decoupled parallel backpropagation algorithm for deep learning optimization with convergence guarantee. Firstly, we decouple the backpropagation algorithm using delayed gradients, and show that the backward locking is removed when we split the networks into multiple modules. Then, we utilize decoupled parallel backpropagation in two stochastic methods and prove that our method guarantees convergence to critical points for the non-convex problem. Finally, we perform experiments for training deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method can achieve significant speedup without loss of accuracy.}
}
Endnote
%0 Conference Paper
%T Decoupled Parallel Backpropagation with Convergence Guarantee
%A Zhouyuan Huo
%A Bin Gu
%A Qian Yang
%A Heng Huang
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-huo18a
%I PMLR
%P 2098--2106
%U https://proceedings.mlr.press/v80/huo18a.html
%V 80
%X Backpropagation algorithm is indispensable for the training of feedforward neural networks. It requires propagating error gradients sequentially from the output layer all the way back to the input layer. The backward locking in backpropagation algorithm constrains us from updating network layers in parallel and fully leveraging the computing resources. Recently, several algorithms have been proposed for breaking the backward locking. However, their performances degrade seriously when networks are deep. In this paper, we propose decoupled parallel backpropagation algorithm for deep learning optimization with convergence guarantee. Firstly, we decouple the backpropagation algorithm using delayed gradients, and show that the backward locking is removed when we split the networks into multiple modules. Then, we utilize decoupled parallel backpropagation in two stochastic methods and prove that our method guarantees convergence to critical points for the non-convex problem. Finally, we perform experiments for training deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method can achieve significant speedup without loss of accuracy.
APA
Huo, Z., Gu, B., Yang, Q., & Huang, H. (2018). Decoupled Parallel Backpropagation with Convergence Guarantee. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2098-2106. Available from https://proceedings.mlr.press/v80/huo18a.html.