Learning Recurrent Neural Net Models of Nonlinear Systems

Joshua Hanson, Maxim Raginsky, Eduardo Sontag
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:425-435, 2021.

Abstract

We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
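The abstract describes learning a continuous-time recurrent neural net with tanh activation. As a rough illustration of the model class, the following is a minimal sketch of simulating such a net in the standard state-space form ẋ = tanh(Ax + Bu), y = Cx (the specific parameterization, random weights, and forward-Euler discretization here are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous-time tanh RNN: x'(t) = tanh(A x(t) + B u(t)), y(t) = C x(t).
# The abstract only specifies "continuous-time recurrent neural net with
# hyperbolic tangent activation"; this state-space form and the Euler
# discretization below are illustrative assumptions.
n_states, n_inputs, n_outputs = 8, 1, 1
A = rng.normal(scale=0.5, size=(n_states, n_states))
B = rng.normal(size=(n_states, n_inputs))
C = rng.normal(size=(n_outputs, n_states))

def simulate(u, dt=0.01):
    """Forward-Euler rollout of the tanh RNN driven by input samples u[k]."""
    x = np.zeros(n_states)
    outputs = []
    for uk in u:
        x = x + dt * np.tanh(A @ x + B @ np.atleast_1d(uk))
        outputs.append(C @ x)
    return np.array(outputs)

t = np.linspace(0.0, 2.0, 200)
y = simulate(np.sin(2 * np.pi * t))
print(y.shape)  # (200, 1)
```

In the paper's setting, the learned parameters (A, B, C here) would be fit from sampled input/output signal pairs rather than drawn at random.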

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-hanson21a,
  title     = {Learning Recurrent Neural Net Models of Nonlinear Systems},
  author    = {Hanson, Joshua and Raginsky, Maxim and Sontag, Eduardo},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages     = {425--435},
  year      = {2021},
  editor    = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume    = {144},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 June},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v144/hanson21a/hanson21a.pdf},
  url       = {https://proceedings.mlr.press/v144/hanson21a.html},
  abstract  = {We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.}
}
Endnote
%0 Conference Paper
%T Learning Recurrent Neural Net Models of Nonlinear Systems
%A Joshua Hanson
%A Maxim Raginsky
%A Eduardo Sontag
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-hanson21a
%I PMLR
%P 425--435
%U https://proceedings.mlr.press/v144/hanson21a.html
%V 144
%X We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
APA
Hanson, J., Raginsky, M. &amp; Sontag, E. (2021). Learning Recurrent Neural Net Models of Nonlinear Systems. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:425-435. Available from https://proceedings.mlr.press/v144/hanson21a.html.