Better Training using Weight-Constrained Stochastic Dynamics

Benedict Leimkuhler, Tiffany J Vlaar, Timothée Pouchon, Amos Storkey
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6200-6211, 2021.

Abstract

We employ constraints to control the parameter space of deep neural networks throughout training. The use of customised, appropriately designed constraints can reduce the vanishing/exploding gradients problem, improve smoothness of classification boundaries, control weight magnitudes and stabilize deep neural networks, and thus enhance the robustness of training algorithms and the generalization capabilities of neural networks. We provide a general approach to efficiently incorporate constraints into a stochastic gradient Langevin framework, allowing enhanced exploration of the loss landscape. We also present specific examples of constrained training methods motivated by orthogonality preservation for weight matrices and explicit weight normalizations. Discretization schemes are provided both for the overdamped formulation of Langevin dynamics and the underdamped form, in which momenta further improve sampling efficiency. These optimisation schemes can be used directly, without needing to adapt neural network architecture design choices or to modify the objective with regularization terms, and yield performance improvements in classification tasks.
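
To make the constrained-Langevin idea above concrete, here is a minimal NumPy sketch of a single overdamped Langevin (SGLD-style) update in which each weight row is kept on the unit sphere, a simple instance of the explicit weight-norm constraints the abstract mentions. The project-after-step discretization and all function names are illustrative assumptions, not the paper's exact constraint-preserving schemes.

import numpy as np

def sphere_project(W):
    # Project each weight row back onto the unit sphere: ||w_i|| = 1.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.maximum(norms, 1e-12)

def constrained_sgld_step(W, grad_W, h, temperature, rng):
    # One overdamped Langevin (SGLD) step: a gradient step plus Gaussian
    # noise scaled by sqrt(2 * h * temperature).
    noise = rng.standard_normal(W.shape)
    W_new = W - h * grad_W + np.sqrt(2.0 * h * temperature) * noise
    # Enforce the norm constraint by projection. (Illustrative only: the
    # paper derives discretizations that preserve the constraint within
    # the dynamics rather than by naive projection after each step.)
    return sphere_project(W_new)

# Hypothetical usage, with random data standing in for a minibatch gradient.
rng = np.random.default_rng(0)
W = sphere_project(rng.standard_normal((64, 32)))
grad_W = rng.standard_normal(W.shape)
W = constrained_sgld_step(W, grad_W, h=1e-3, temperature=1e-4, rng=rng)

Because the projection is applied at every step, the iterates stay exactly on the constraint manifold, which illustrates the abstract's point that constraints act directly on the parameter space rather than through added regularization terms in the objective.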

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-leimkuhler21a,
  title     = {Better Training using Weight-Constrained Stochastic Dynamics},
  author    = {Leimkuhler, Benedict and Vlaar, Tiffany J and Pouchon, Timoth{\'e}e and Storkey, Amos},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6200--6211},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/leimkuhler21a/leimkuhler21a.pdf},
  url       = {https://proceedings.mlr.press/v139/leimkuhler21a.html}
}
Endnote
%0 Conference Paper
%T Better Training using Weight-Constrained Stochastic Dynamics
%A Benedict Leimkuhler
%A Tiffany J Vlaar
%A Timothée Pouchon
%A Amos Storkey
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-leimkuhler21a
%I PMLR
%P 6200--6211
%U https://proceedings.mlr.press/v139/leimkuhler21a.html
%V 139
APA
Leimkuhler, B., Vlaar, T. J., Pouchon, T., & Storkey, A. (2021). Better Training using Weight-Constrained Stochastic Dynamics. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6200-6211. Available from https://proceedings.mlr.press/v139/leimkuhler21a.html.
