Active Learning for Identification of Linear Dynamical Systems

Andrew Wagenmaker, Kevin Jamieson
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:3487-3582, 2020.

Abstract

We propose an algorithm to actively estimate the parameters of a linear dynamical system. Given complete control over the system’s input, our algorithm adaptively chooses the inputs to accelerate estimation. We show a finite time bound quantifying the estimation rate our algorithm attains and prove matching upper and lower bounds which guarantee its asymptotic optimality, up to constants. In addition, we show that this optimal rate is unattainable when using Gaussian noise to excite the system, even with optimally tuned covariance, and analyze several examples where our algorithm provably improves over rates obtained by playing noise. Our analysis critically relies on a novel result quantifying the error in estimating the parameters of a dynamical system when arbitrary periodic inputs are being played. We conclude with numerical examples that illustrate the effectiveness of our algorithm in practice.
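To make the setting concrete, the sketch below simulates a small linear dynamical system x_{t+1} = A x_t + B u_t + w_t and recovers (A, B) by ordinary least squares under i.i.d. Gaussian input excitation, the baseline the paper improves on. The specific matrices, noise level, and estimator are illustrative assumptions, not the paper's active input-design algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 1-input system (stable: eigenvalues 0.9 and 0.8).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
T = 5000

x = np.zeros(2)
X, U, Xn = [], [], []
for _ in range(T):
    u = rng.normal(size=1)        # i.i.d. Gaussian excitation (baseline, not the active scheme)
    w = 0.1 * rng.normal(size=2)  # process noise
    x_next = A @ x + B @ u + w
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

# Stack regressors z_t = [x_t; u_t] and solve Xn ~ Z [A B]^T by least squares.
Z = np.hstack([np.array(X), np.array(U)])
theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]

print(np.linalg.norm(A_hat - A))  # estimation error shrinks as T grows
```

The paper's contribution is, in effect, choosing the input sequence `u` adaptively (via periodic inputs) rather than sampling it from a fixed Gaussian, which provably tightens the estimation rate achieved by this kind of least-squares fit.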

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-wagenmaker20a,
  title     = {Active Learning for Identification of Linear Dynamical Systems},
  author    = {Wagenmaker, Andrew and Jamieson, Kevin},
  pages     = {3487--3582},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/wagenmaker20a/wagenmaker20a.pdf},
  url       = {http://proceedings.mlr.press/v125/wagenmaker20a.html},
  abstract  = {We propose an algorithm to actively estimate the parameters of a linear dynamical system. Given complete control over the system’s input, our algorithm adaptively chooses the inputs to accelerate estimation. We show a finite time bound quantifying the estimation rate our algorithm attains and prove matching upper and lower bounds which guarantee its asymptotic optimality, up to constants. In addition, we show that this optimal rate is unattainable when using Gaussian noise to excite the system, even with optimally tuned covariance, and analyze several examples where our algorithm provably improves over rates obtained by playing noise. Our analysis critically relies on a novel result quantifying the error in estimating the parameters of a dynamical system when arbitrary periodic inputs are being played. We conclude with numerical examples that illustrate the effectiveness of our algorithm in practice.}
}
Endnote
%0 Conference Paper
%T Active Learning for Identification of Linear Dynamical Systems
%A Andrew Wagenmaker
%A Kevin Jamieson
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-wagenmaker20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 3487--3582
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X We propose an algorithm to actively estimate the parameters of a linear dynamical system. Given complete control over the system’s input, our algorithm adaptively chooses the inputs to accelerate estimation. We show a finite time bound quantifying the estimation rate our algorithm attains and prove matching upper and lower bounds which guarantee its asymptotic optimality, up to constants. In addition, we show that this optimal rate is unattainable when using Gaussian noise to excite the system, even with optimally tuned covariance, and analyze several examples where our algorithm provably improves over rates obtained by playing noise. Our analysis critically relies on a novel result quantifying the error in estimating the parameters of a dynamical system when arbitrary periodic inputs are being played. We conclude with numerical examples that illustrate the effectiveness of our algorithm in practice.
APA
Wagenmaker, A. & Jamieson, K. (2020). Active Learning for Identification of Linear Dynamical Systems. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:3487-3582