Neuron birth-death dynamics accelerates gradient descent and converges asymptotically

Grant Rotskoff, Samy Jelassi, Joan Bruna, Eric Vanden-Eijnden
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5508-5517, 2019.

Abstract

Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for their favorable training properties. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth/death process and prove that it accelerates the rate of convergence in the mean-field limit. We then realize the modified PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with a finite number of parameters. We illustrate our algorithms on two models to provide intuition for the mechanism by which convergence is accelerated.
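
The birth/death mechanism described in the abstract can be made concrete with a small sketch. The paper's actual schemes differ in detail; the code below is a minimal illustration under stated assumptions, not the authors' implementation. In particular, birth_death_step, fitness, and alpha are hypothetical names: fitness stands in for a per-neuron estimate of how much the neuron contributes to the loss (lower is better), centered so that the population size is conserved on average; neurons with high values are stochastically killed, and neurons with low values are duplicated.

    import numpy as np

    def birth_death_step(params, fitness, alpha, dt, rng):
        """One illustrative birth/death resampling step (sketch, not the paper's scheme).

        params : (n, d) array, one row of parameters per hidden neuron
        fitness: (n,) per-neuron loss-sensitivity estimate (lower is better)
        alpha  : birth/death rate constant; dt: time step
        """
        n = params.shape[0]
        v = fitness - fitness.mean()                       # center so mass is conserved on average
        rate = alpha * dt * v
        kill = rng.random(n) < np.clip(rate, 0.0, 1.0)     # high value -> death
        dup = rng.random(n) < np.clip(-rate, 0.0, 1.0)     # low value  -> birth (duplicate)
        new = np.concatenate([params[~kill], params[dup]], axis=0)
        if new.shape[0] == 0:                              # degenerate case: keep old population
            return params
        idx = rng.integers(0, new.shape[0], size=n)        # resample to exactly n neurons
        return new[idx]

In a training loop, such a step would be interleaved with ordinary gradient descent updates on params; the non-local ingredient is that each neuron's survival depends on the whole population through the centering of fitness.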

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rotskoff19a,
  title     = {Neuron birth-death dynamics accelerates gradient descent and converges asymptotically},
  author    = {Rotskoff, Grant and Jelassi, Samy and Bruna, Joan and Vanden-Eijnden, Eric},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5508--5517},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/rotskoff19a/rotskoff19a.pdf},
  url       = {https://proceedings.mlr.press/v97/rotskoff19a.html},
  abstract  = {Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of models with a large number of parameters. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth/death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated.}
}
Endnote
%0 Conference Paper
%T Neuron birth-death dynamics accelerates gradient descent and converges asymptotically
%A Grant Rotskoff
%A Samy Jelassi
%A Joan Bruna
%A Eric Vanden-Eijnden
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rotskoff19a
%I PMLR
%P 5508--5517
%U https://proceedings.mlr.press/v97/rotskoff19a.html
%V 97
%X Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of models with a large number of parameters. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth/death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated.
APA
Rotskoff, G., Jelassi, S., Bruna, J. & Vanden-Eijnden, E. (2019). Neuron birth-death dynamics accelerates gradient descent and converges asymptotically. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5508-5517. Available from https://proceedings.mlr.press/v97/rotskoff19a.html.
