On the Implicit Bias of Gradient Descent for Temporal Extrapolation

Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen, Amir Globerson
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10966-10981, 2022.

Abstract

When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training. This “extrapolating” usage deviates from the traditional statistical learning setup where guarantees are provided under the assumption that train and test distributions are identical. Here we set out to understand when RNNs can extrapolate, focusing on a simple case where the data generating distribution is memoryless. We first show that even with infinite training data, there exist RNN models that interpolate perfectly (i.e., they fit the training data) yet extrapolate poorly to longer sequences. We then show that if gradient descent is used for training, learning will converge to perfect extrapolation under certain assumptions on initialization. Our results complement recent studies on the implicit bias of gradient descent, showing that it plays a key role in extrapolation when learning temporal prediction models.
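The phenomenon the abstract describes can be illustrated with a toy experiment (a minimal sketch, not the paper's exact setting): a scalar linear RNN h_t = a·h_{t-1} + b·x_t with readout y = c·h_T, trained by plain gradient descent on a memoryless target y = x_T (only the last input matters). The parameter names a, b, c, the training length, and the learning rate below are illustrative choices; an extrapolating solution has a = 0 and b·c = 1, and training on short sequences then testing on 4x longer ones probes whether gradient descent finds it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear RNN: h_t = a*h_{t-1} + b*x_t, prediction y = c*h_T.
# Memoryless target: y depends only on the final input, y = x_T.
# An exactly extrapolating solution has a = 0 and b*c = 1.
a, b, c = 0.1, 0.5, 0.5            # modest initialization (illustrative)
lr, T_train, N = 0.05, 5, 256      # learning rate, train length, batch size

def forward(x, a, b):
    """Run the RNN over a batch of sequences x of shape (n, T);
    return all hidden states, h[:, 0] being the zero initial state."""
    n, T = x.shape
    h = np.zeros((n, T + 1))
    for t in range(T):
        h[:, t + 1] = a * h[:, t] + b * x[:, t]
    return h

for step in range(3000):
    x = rng.normal(size=(N, T_train))
    target = x[:, -1]               # memoryless: only the last input matters
    h = forward(x, a, b)
    err = c * h[:, -1] - target
    # Backprop through time; every map is linear, so the gradients are exact.
    dc = np.mean(2 * err * h[:, -1])
    dh = 2 * err * c / N            # gradient of the mean loss w.r.t. h_T
    da = db = 0.0
    for t in range(T_train, 0, -1):
        da += np.sum(dh * h[:, t - 1])
        db += np.sum(dh * x[:, t - 1])
        dh = a * dh                 # propagate one step back in time
    a, b, c = a - lr * da, b - lr * db, c - lr * dc

# Evaluate on sequences 4x longer than anything seen during training.
x_long = rng.normal(size=(N, 4 * T_train))
y_long = c * forward(x_long, a, b)[:, -1]
extrap_mse = np.mean((y_long - x_long[:, -1]) ** 2)
print(f"|a| = {abs(a):.3f}, extrapolation MSE = {extrap_mse:.4f}")
```

In this sketch gradient descent drives the recurrent weight a toward zero, so the trained model keeps predicting correctly on sequences far longer than those it was trained on, matching the implicit-bias picture the abstract describes; a hand-crafted interpolating model with large |a| would fit length-5 data yet diverge on longer inputs.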

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-cohen-karlik22a,
  title     = {On the Implicit Bias of Gradient Descent for Temporal Extrapolation},
  author    = {Cohen-Karlik, Edo and Ben David, Avichai and Cohen, Nadav and Globerson, Amir},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {10966--10981},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/cohen-karlik22a/cohen-karlik22a.pdf},
  url       = {https://proceedings.mlr.press/v151/cohen-karlik22a.html},
  abstract  = {When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training. This “extrapolating” usage deviates from the traditional statistical learning setup where guarantees are provided under the assumption that train and test distributions are identical. Here we set out to understand when RNNs can extrapolate, focusing on a simple case where the data generating distribution is memoryless. We first show that even with infinite training data, there exist RNN models that interpolate perfectly (i.e., they fit the training data) yet extrapolate poorly to longer sequences. We then show that if gradient descent is used for training, learning will converge to perfect extrapolation under certain assumptions on initialization. Our results complement recent studies on the implicit bias of gradient descent, showing that it plays a key role in extrapolation when learning temporal prediction models.}
}
Endnote
%0 Conference Paper
%T On the Implicit Bias of Gradient Descent for Temporal Extrapolation
%A Edo Cohen-Karlik
%A Avichai Ben David
%A Nadav Cohen
%A Amir Globerson
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-cohen-karlik22a
%I PMLR
%P 10966--10981
%U https://proceedings.mlr.press/v151/cohen-karlik22a.html
%V 151
%X When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training. This “extrapolating” usage deviates from the traditional statistical learning setup where guarantees are provided under the assumption that train and test distributions are identical. Here we set out to understand when RNNs can extrapolate, focusing on a simple case where the data generating distribution is memoryless. We first show that even with infinite training data, there exist RNN models that interpolate perfectly (i.e., they fit the training data) yet extrapolate poorly to longer sequences. We then show that if gradient descent is used for training, learning will converge to perfect extrapolation under certain assumptions on initialization. Our results complement recent studies on the implicit bias of gradient descent, showing that it plays a key role in extrapolation when learning temporal prediction models.
APA
Cohen-Karlik, E., Ben David, A., Cohen, N. & Globerson, A. (2022). On the Implicit Bias of Gradient Descent for Temporal Extrapolation. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10966-10981. Available from https://proceedings.mlr.press/v151/cohen-karlik22a.html.