On Monotonic Linear Interpolation of Neural Network Parameters

James R Lucas, Juhan Bae, Michael R Zhang, Stanislav Fort, Richard Zemel, Roger B Grosse
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7168-7179, 2021.

Abstract

Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective. This Monotonic Linear Interpolation (MLI) property, first observed by Goodfellow et al. 2014, persists in spite of the non-convex objectives and highly non-linear training dynamics of neural networks. Extending this work, we evaluate several hypotheses for this property that, to our knowledge, have not yet been explored. Using tools from differential geometry, we draw connections between the interpolated paths in function space and the monotonicity of the network — providing sufficient conditions for the MLI property under mean squared error. While the MLI property holds under various settings (e.g., network architectures and learning problems), we show in practice that networks violating the MLI property can be produced systematically, by encouraging the weights to move far from initialization. The MLI property raises important questions about the loss landscape geometry of neural networks and highlights the need to further study their global properties.
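To make the interpolation procedure described in the abstract concrete, the following is a minimal sketch (not the authors' code) of evaluating the training loss along the straight line between the initial and trained parameters, assuming PyTorch; the model, data loader, and loss function are placeholders supplied by the caller.

    import copy
    import torch

    def loss_along_interpolation(model, init_state, final_state, data_loader,
                                 loss_fn, num_alphas=50):
        """Evaluate the training loss at evenly spaced points on the line
        theta(alpha) = (1 - alpha) * theta_0 + alpha * theta_T.

        init_state / final_state are state_dicts saved at initialization and
        after training. Assumes all state_dict entries are floating point
        (e.g., no integer batch-norm counters). The MLI property holds if the
        returned losses are non-increasing in alpha.
        """
        losses = []
        probe = copy.deepcopy(model)
        for alpha in torch.linspace(0.0, 1.0, num_alphas):
            alpha = float(alpha)
            # Linearly interpolate every parameter and buffer tensor.
            interp_state = {
                name: (1.0 - alpha) * init_state[name] + alpha * final_state[name]
                for name in init_state
            }
            probe.load_state_dict(interp_state)
            probe.eval()

            total, count = 0.0, 0
            with torch.no_grad():
                for inputs, targets in data_loader:
                    total += loss_fn(probe(inputs), targets).item() * len(targets)
                    count += len(targets)
            losses.append(total / count)
        return losses

Calling this with the state dicts captured before and after training, and checking that the returned list is non-increasing in alpha, is the basic MLI check the abstract refers to.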

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-lucas21a,
  title =     {On Monotonic Linear Interpolation of Neural Network Parameters},
  author =    {Lucas, James R and Bae, Juhan and Zhang, Michael R and Fort, Stanislav and Zemel, Richard and Grosse, Roger B},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages =     {7168--7179},
  year =      {2021},
  editor =    {Meila, Marina and Zhang, Tong},
  volume =    {139},
  series =    {Proceedings of Machine Learning Research},
  month =     {18--24 Jul},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v139/lucas21a/lucas21a.pdf},
  url =       {https://proceedings.mlr.press/v139/lucas21a.html},
  abstract =  {Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective. This Monotonic Linear Interpolation (MLI) property, first observed by Goodfellow et al. 2014, persists in spite of the non-convex objectives and highly non-linear training dynamics of neural networks. Extending this work, we evaluate several hypotheses for this property that, to our knowledge, have not yet been explored. Using tools from differential geometry, we draw connections between the interpolated paths in function space and the monotonicity of the network — providing sufficient conditions for the MLI property under mean squared error. While the MLI property holds under various settings (e.g., network architectures and learning problems), we show in practice that networks violating the MLI property can be produced systematically, by encouraging the weights to move far from initialization. The MLI property raises important questions about the loss landscape geometry of neural networks and highlights the need to further study their global properties.}
}
Endnote
%0 Conference Paper
%T On Monotonic Linear Interpolation of Neural Network Parameters
%A James R Lucas
%A Juhan Bae
%A Michael R Zhang
%A Stanislav Fort
%A Richard Zemel
%A Roger B Grosse
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-lucas21a
%I PMLR
%P 7168--7179
%U https://proceedings.mlr.press/v139/lucas21a.html
%V 139
%X Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective. This Monotonic Linear Interpolation (MLI) property, first observed by Goodfellow et al. 2014, persists in spite of the non-convex objectives and highly non-linear training dynamics of neural networks. Extending this work, we evaluate several hypotheses for this property that, to our knowledge, have not yet been explored. Using tools from differential geometry, we draw connections between the interpolated paths in function space and the monotonicity of the network — providing sufficient conditions for the MLI property under mean squared error. While the MLI property holds under various settings (e.g., network architectures and learning problems), we show in practice that networks violating the MLI property can be produced systematically, by encouraging the weights to move far from initialization. The MLI property raises important questions about the loss landscape geometry of neural networks and highlights the need to further study their global properties.
APA
Lucas, J.R., Bae, J., Zhang, M.R., Fort, S., Zemel, R. & Grosse, R.B. (2021). On Monotonic Linear Interpolation of Neural Network Parameters. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7168-7179. Available from https://proceedings.mlr.press/v139/lucas21a.html.