On Generalization Bounds of a Family of Recurrent Neural Networks

Minshuo Chen, Xingguo Li, Tuo Zhao
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1233-1243, 2020.

Abstract

Recurrent Neural Networks (RNNs) have been widely applied to sequential data analysis. Due to their complicated modeling structures, however, the theory behind them is still largely missing. To connect theory and practice, we study the generalization properties of vanilla RNNs as well as their variants, including Minimal Gated Unit (MGU), Long Short Term Memory (LSTM), and Convolutional (Conv) RNNs. Specifically, our theory is established under the PAC-Learning framework. The generalization bound is presented in terms of the spectral norms of the weight matrices and the total number of parameters. We also establish refined generalization bounds with additional norm assumptions, and draw a comparison among these bounds. We remark: (1) Our generalization bound for vanilla RNNs is significantly tighter than the best of existing results; (2) We are not aware of any other generalization bounds for MGU and LSTM RNNs in the existing literature; (3) We demonstrate the advantages of these variants in generalization.
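As the abstract notes, the bound is stated in terms of the spectral norms of the weight matrices and the total number of parameters. The sketch below only illustrates how one might compute these quantities for a vanilla RNN; it does not reproduce the paper's actual bound, and the parameterization h_t = tanh(W h_{t-1} + U x_t), y_t = V h_t, together with the matrix names W, U, V, is an assumption following common vanilla-RNN notation.

import numpy as np

def rnn_capacity_terms(W, U, V):
    """Spectral norms and parameter count for a vanilla RNN (illustrative only)."""
    spectral_norms = {
        "W": np.linalg.norm(W, ord=2),  # recurrent (hidden-to-hidden) weights
        "U": np.linalg.norm(U, ord=2),  # input-to-hidden weights
        "V": np.linalg.norm(V, ord=2),  # hidden-to-output weights
    }
    total_params = W.size + U.size + V.size
    return spectral_norms, total_params

# Example: hidden size 64, input size 32, output size 10
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
U = rng.normal(size=(64, 32))
V = rng.normal(size=(10, 64))
norms, num_params = rnn_capacity_terms(W, U, V)
print(norms, num_params)

Here np.linalg.norm(., ord=2) returns the largest singular value of a matrix, i.e. its spectral norm; these are the quantities the abstract says the bound depends on, alongside the parameter count.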

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-chen20d,
  title     = {On Generalization Bounds of a Family of Recurrent Neural Networks},
  author    = {Chen, Minshuo and Li, Xingguo and Zhao, Tuo},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1233--1243},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/chen20d/chen20d.pdf},
  url       = {https://proceedings.mlr.press/v108/chen20d.html},
  abstract  = {Recurrent Neural Networks (RNNs) have been widely applied to sequential data analysis. Due to their complicated modeling structures, however, the theory behind them is still largely missing. To connect theory and practice, we study the generalization properties of vanilla RNNs as well as their variants, including Minimal Gated Unit (MGU), Long Short Term Memory (LSTM), and Convolutional (Conv) RNNs. Specifically, our theory is established under the PAC-Learning framework. The generalization bound is presented in terms of the spectral norms of the weight matrices and the total number of parameters. We also establish refined generalization bounds with additional norm assumptions, and draw a comparison among these bounds. We remark: (1) Our generalization bound for vanilla RNNs is significantly tighter than the best of existing results; (2) We are not aware of any other generalization bounds for MGU and LSTM RNNs in the existing literature; (3) We demonstrate the advantages of these variants in generalization.}
}
Endnote
%0 Conference Paper
%T On Generalization Bounds of a Family of Recurrent Neural Networks
%A Minshuo Chen
%A Xingguo Li
%A Tuo Zhao
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-chen20d
%I PMLR
%P 1233--1243
%U https://proceedings.mlr.press/v108/chen20d.html
%V 108
%X Recurrent Neural Networks (RNNs) have been widely applied to sequential data analysis. Due to their complicated modeling structures, however, the theory behind them is still largely missing. To connect theory and practice, we study the generalization properties of vanilla RNNs as well as their variants, including Minimal Gated Unit (MGU), Long Short Term Memory (LSTM), and Convolutional (Conv) RNNs. Specifically, our theory is established under the PAC-Learning framework. The generalization bound is presented in terms of the spectral norms of the weight matrices and the total number of parameters. We also establish refined generalization bounds with additional norm assumptions, and draw a comparison among these bounds. We remark: (1) Our generalization bound for vanilla RNNs is significantly tighter than the best of existing results; (2) We are not aware of any other generalization bounds for MGU and LSTM RNNs in the existing literature; (3) We demonstrate the advantages of these variants in generalization.
APA
Chen, M., Li, X. & Zhao, T. (2020). On Generalization Bounds of a Family of Recurrent Neural Networks. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1233-1243. Available from https://proceedings.mlr.press/v108/chen20d.html.