Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems

Scott Linderman, Matthew Johnson, Andrew Miller, Ryan Adams, David Blei, Liam Paninski
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:914-922, 2017.

Abstract

Many natural systems, such as neurons firing in the brain or basketball teams traversing a court, give rise to time series data with complex, nonlinear dynamics. We can gain insight into these systems by decomposing the data into segments that are each explained by simpler dynamic units. Building on switching linear dynamical systems (SLDS), we develop a model class and Bayesian inference algorithms that not only discover these dynamical units but also, by learning how transition probabilities depend on observations or continuous latent states, explain their switching behavior. Our key innovation is to design these recurrent SLDS models to enable recent Pólya-gamma auxiliary variable techniques and thus make approximate Bayesian learning and inference in these models easy, fast, and scalable.
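To make the model structure concrete, here is a minimal generative sketch of a recurrent SLDS in the spirit of the paper; it is not the authors' implementation. Each discrete state indexes a linear dynamical regime, and the next discrete state is drawn from a stick-breaking logistic regression on the current continuous latent state, which is the dependence that Pólya-gamma augmentation can render conditionally conjugate. All names, dimensions, and the single shared recurrence weights (R, r) are illustrative assumptions.

# Illustrative sketch of an rSLDS generative model (assumed parameterization, not the paper's code).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def stick_breaking_probs(nu):
    # Map K-1 logits to a K-dimensional probability vector via stick breaking.
    K = len(nu) + 1
    p = np.zeros(K)
    remaining = 1.0
    for k in range(K - 1):
        p[k] = sigmoid(nu[k]) * remaining
        remaining -= p[k]
    p[K - 1] = remaining
    return p

def sample_rslds(T, A, b, Q, C, d, S, R, r, seed=None):
    # A, b, Q: per-state linear dynamics (K x M x M, K x M, K x M x M).
    # C, d, S: linear-Gaussian emissions (N x M, N, N x N).
    # R, r: recurrence weights mapping x_{t-1} to K-1 stick-breaking logits.
    rng = np.random.default_rng(seed)
    K, M = A.shape[0], A.shape[1]
    N = C.shape[0]
    z = np.zeros(T, dtype=int)
    x = np.zeros((T, M))
    y = np.zeros((T, N))
    x[0] = rng.normal(size=M)  # start in regime 0 with a standard-normal state, for simplicity
    for t in range(T):
        if t > 0:
            # Transition probabilities depend on the previous continuous state.
            pi = stick_breaking_probs(R @ x[t - 1] + r)
            z[t] = rng.choice(K, p=pi)
            x[t] = A[z[t]] @ x[t - 1] + b[z[t]] \
                + rng.multivariate_normal(np.zeros(M), Q[z[t]])
        y[t] = C @ x[t] + d + rng.multivariate_normal(np.zeros(N), S)
    return z, x, y

In the paper's more general variants, the recurrence weights can also depend on the previous discrete state (and on observations); the sketch above fixes a single shared (R, r) for brevity. The stick-breaking logistic construction is what exposes the logistic likelihoods that the Pólya-gamma auxiliary variable scheme augments.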

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-linderman17a,
  title     = {{Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems}},
  author    = {Linderman, Scott and Johnson, Matthew and Miller, Andrew and Adams, Ryan and Blei, David and Paninski, Liam},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {914--922},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/linderman17a/linderman17a.pdf},
  url       = {https://proceedings.mlr.press/v54/linderman17a.html}
}
APA
Linderman, S., Johnson, M., Miller, A., Adams, R., Blei, D. & Paninski, L. (2017). Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:914-922. Available from https://proceedings.mlr.press/v54/linderman17a.html.
