Expected Gradients of Maxout Networks and Consequences to Parameter Initialization

Hanna Tseran, Guido Montufar
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34491-34532, 2023.

Abstract

We study the gradients of a maxout network with respect to inputs and parameters and obtain bounds for the moments depending on the architecture and the parameter distribution. We observe that the distribution of the input-output Jacobian depends on the input, which complicates a stable parameter initialization. Based on the moments of the gradients, we formulate parameter initialization strategies that avoid vanishing and exploding gradients in wide networks. Experiments with deep fully-connected and convolutional networks show that this strategy improves SGD and Adam training of deep maxout networks. In addition, we obtain refined bounds on the expected number of linear regions, results on the expected curve length distortion, and results on the NTK.
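To make the initialization idea concrete, below is a minimal sketch, assuming PyTorch, of a fully-connected maxout layer whose weights are drawn from a zero-mean Gaussian with variance init_constant / fan_in. The class name MaxoutLinear and the parameter init_constant are illustrative placeholders, not the paper's exact prescription: the paper derives the appropriate scaling constant from the moments of the gradients, and that constant depends on the maxout rank (the number of linear maps per unit).

import torch
import torch.nn as nn

class MaxoutLinear(nn.Module):
    """Fully-connected maxout layer: each output is the max over `rank` affine maps."""

    def __init__(self, in_features, out_features, rank, init_constant=1.0):
        super().__init__()
        self.out_features = out_features
        self.rank = rank
        # One linear map producing all rank * out_features pre-activations at once.
        self.linear = nn.Linear(in_features, out_features * rank)
        # Zero-mean Gaussian initialization with variance init_constant / fan_in.
        # init_constant is a placeholder; the paper's analysis determines how it
        # should depend on the maxout rank to keep gradients from vanishing or exploding.
        std = (init_constant / in_features) ** 0.5
        nn.init.normal_(self.linear.weight, mean=0.0, std=std)
        nn.init.zeros_(self.linear.bias)

    def forward(self, x):
        z = self.linear(x)                                        # (..., out_features * rank)
        z = z.view(*x.shape[:-1], self.out_features, self.rank)   # (..., out_features, rank)
        return z.max(dim=-1).values                               # maxout: max over the rank maps

# Usage sketch: stack a few wide layers and pass a random batch through them.
layers = nn.Sequential(*[MaxoutLinear(256, 256, rank=5) for _ in range(10)])
y = layers(torch.randn(32, 256))   # shape (32, 256)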

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-tseran23a,
  title     = {Expected Gradients of Maxout Networks and Consequences to Parameter Initialization},
  author    = {Tseran, Hanna and Montufar, Guido},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {34491--34532},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/tseran23a/tseran23a.pdf},
  url       = {https://proceedings.mlr.press/v202/tseran23a.html},
  abstract  = {We study the gradients of a maxout network with respect to inputs and parameters and obtain bounds for the moments depending on the architecture and the parameter distribution. We observe that the distribution of the input-output Jacobian depends on the input, which complicates a stable parameter initialization. Based on the moments of the gradients, we formulate parameter initialization strategies that avoid vanishing and exploding gradients in wide networks. Experiments with deep fully-connected and convolutional networks show that this strategy improves SGD and Adam training of deep maxout networks. In addition, we obtain refined bounds on the expected number of linear regions, results on the expected curve length distortion, and results on the NTK.}
}
Endnote
%0 Conference Paper
%T Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
%A Hanna Tseran
%A Guido Montufar
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-tseran23a
%I PMLR
%P 34491--34532
%U https://proceedings.mlr.press/v202/tseran23a.html
%V 202
%X We study the gradients of a maxout network with respect to inputs and parameters and obtain bounds for the moments depending on the architecture and the parameter distribution. We observe that the distribution of the input-output Jacobian depends on the input, which complicates a stable parameter initialization. Based on the moments of the gradients, we formulate parameter initialization strategies that avoid vanishing and exploding gradients in wide networks. Experiments with deep fully-connected and convolutional networks show that this strategy improves SGD and Adam training of deep maxout networks. In addition, we obtain refined bounds on the expected number of linear regions, results on the expected curve length distortion, and results on the NTK.
APA
Tseran, H. & Montufar, G. (2023). Expected Gradients of Maxout Networks and Consequences to Parameter Initialization. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:34491-34532. Available from https://proceedings.mlr.press/v202/tseran23a.html.
