The role of optimization geometry in single neuron learning

Nicholas Boffi, Stephen Tu, Jean-Jacques Slotine
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:11528-11549, 2022.

Abstract

Recent numerical experiments have demonstrated that the choice of optimization geometry used during training can impact generalization performance when learning expressive nonlinear model classes such as deep neural networks. These observations have important implications for modern deep learning, but remain poorly understood due to the difficulty of the associated nonconvex optimization. Towards an understanding of this phenomenon, we analyze a family of pseudogradient methods for learning generalized linear models under the square loss – a simplified problem that retains both nonlinearity in the model parameters and nonconvexity of the optimization, and that admits a single neuron as a special case. We prove non-asymptotic bounds on the generalization error that sharply characterize how the interplay between the optimization geometry and the feature space geometry determines the out-of-sample performance of the learned model. Experimentally, selecting the optimization geometry as suggested by our theory leads to improved performance in generalized linear model estimation problems such as nonlinear and nonconvex variants of sparse vector recovery and low-rank matrix sensing.
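As a concrete illustration of the setting (a minimal sketch, not the authors' code), the snippet below implements a GLM-tron-style pseudogradient update for a generalized linear model y = σ(⟨w, x⟩) under the square loss: the link function's derivative is dropped from the true gradient, and the optimization geometry is varied through a mirror-descent step with a p-norm potential ψ(w) = ½‖w‖_p². The function names, the choice of potential, and all hyperparameters are illustrative assumptions.

import numpy as np

def pnorm_map(v, p):
    # Gradient of psi(v) = 0.5 * ||v||_p^2. For conjugate exponents p and q
    # (1/p + 1/q = 1), pnorm_map(., q) inverts pnorm_map(., p), giving the
    # primal-dual pair of mirror maps.
    norm = np.linalg.norm(v, ord=p)
    if norm == 0.0:
        return np.zeros_like(v)
    return np.sign(v) * np.abs(v) ** (p - 1) * norm ** (2 - p)

def pseudogradient_mirror_descent(X, y, sigma, p=2.0, eta=0.05, iters=2000):
    # Learn w in y ~ sigma(X @ w): take pseudogradient steps (sigma' omitted
    # from the square-loss gradient) in the dual geometry induced by
    # psi(w) = 0.5 * ||w||_p^2.
    n, d = X.shape
    q = p / (p - 1.0)                      # conjugate exponent of p
    theta = np.zeros(d)                    # dual iterate, theta = grad psi(w)
    w = np.zeros(d)
    for _ in range(iters):
        g = X.T @ (sigma(X @ w) - y) / n   # pseudogradient of the square loss
        theta -= eta * g                   # mirror step in the dual
        w = pnorm_map(theta, q)            # back to primal: w = grad psi*(theta)
    return w

# Noiseless recovery of a sparse target by a single ReLU neuron: a geometry
# with p close to 1 matches the sparsity of w_star.
rng = np.random.default_rng(0)
n, d, k = 500, 100, 5
w_star = np.zeros(d)
w_star[:k] = 1.0
X = rng.standard_normal((n, d))
relu = lambda z: np.maximum(z, 0.0)
y = relu(X @ w_star)
w_hat = pseudogradient_mirror_descent(X, y, relu, p=1.2)
print(np.linalg.norm(w_hat - w_star))

With p = 2 the potential reduces to ½‖w‖₂² and the update is ordinary pseudogradient descent; taking p closer to 1 biases the iterates toward sparse solutions, which is the regime relevant to the sparse vector recovery experiments mentioned in the abstract.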

Cite this Paper

BibTeX
@InProceedings{pmlr-v151-boffi22a,
  title     = {The role of optimization geometry in single neuron learning},
  author    = {Boffi, Nicholas and Tu, Stephen and Slotine, Jean-Jacques},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {11528--11549},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/boffi22a/boffi22a.pdf},
  url       = {https://proceedings.mlr.press/v151/boffi22a.html}
}
Endnote
%0 Conference Paper
%T The role of optimization geometry in single neuron learning
%A Nicholas Boffi
%A Stephen Tu
%A Jean-Jacques Slotine
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-boffi22a
%I PMLR
%P 11528--11549
%U https://proceedings.mlr.press/v151/boffi22a.html
%V 151
APA
Boffi, N., Tu, S., &amp; Slotine, J.-J. (2022). The role of optimization geometry in single neuron learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:11528-11549. Available from https://proceedings.mlr.press/v151/boffi22a.html.