The neural moving average model for scalable variational inference of state space models

Thomas Ryder, Dennis Prangle, Andrew Golightly, Isaac Matthews
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:12-22, 2021.

Abstract

Variational inference has had great success in scaling approximate Bayesian inference to big data by exploiting mini-batch training. To date, however, this strategy has been most applicable to models of independent data. We propose an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model. This permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost. We illustrate our method on autoregressive, Lotka-Volterra, FitzHugh-Nagumo and stochastic volatility models, achieving accurate parameter estimation in a short time.
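The subsequence-sampling property described in the abstract can be illustrated with a toy sketch: in the spirit of a moving average process, each latent state depends only on a bounded window of recent noise variables, so sampling a mini-batch of states costs time proportional to the batch length, not the full series length. This is a minimal illustration of that idea, not the authors' implementation; the window length k, the tanh nonlinearity, the fixed random weights standing in for a learned network, and the function name sample_subsequence are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical window length and a stand-in "network": in the paper a neural
# network maps a window of base noise to a latent state; here a fixed random
# linear map plus tanh plays that role, purely for illustration.
k = 5
W = rng.normal(size=(k,))

def sample_subsequence(a, b):
    """Sample latent states x_a, ..., x_b using only the local noise
    window -- no need to simulate the whole time series."""
    n = b - a + 1
    eps = rng.normal(size=n + k - 1)  # just the noise the windows need
    # Each x_t is a function of the last k noise terms, like an MA(k)
    # process with the linear map replaced by a (here, toy) neural net.
    return np.array([np.tanh(W @ eps[i:i + k]) for i in range(n)])

# A mini-batch over times 100..131 costs O(batch length), independent of T.
x_batch = sample_subsequence(100, 131)
print(x_batch.shape)  # (32,)
```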

Cite this Paper

BibTeX
@InProceedings{pmlr-v161-ryder21a,
  title     = {The neural moving average model for scalable variational inference of state space models},
  author    = {Ryder, Thomas and Prangle, Dennis and Golightly, Andrew and Matthews, Isaac},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {12--22},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/ryder21a/ryder21a.pdf},
  url       = {https://proceedings.mlr.press/v161/ryder21a.html},
  abstract  = {Variational inference has had great success in scaling approximate Bayesian inference to big data by exploiting mini-batch training. To date, however, this strategy has been most applicable to models of independent data. We propose an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model. This permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost. We illustrate our method on autoregressive, Lotka-Volterra, FitzHugh-Nagumo and stochastic volatility models, achieving accurate parameter estimation in a short time.}
}
Endnote
%0 Conference Paper
%T The neural moving average model for scalable variational inference of state space models
%A Thomas Ryder
%A Dennis Prangle
%A Andrew Golightly
%A Isaac Matthews
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-ryder21a
%I PMLR
%P 12--22
%U https://proceedings.mlr.press/v161/ryder21a.html
%V 161
%X Variational inference has had great success in scaling approximate Bayesian inference to big data by exploiting mini-batch training. To date, however, this strategy has been most applicable to models of independent data. We propose an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model. This permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost. We illustrate our method on autoregressive, Lotka-Volterra, FitzHugh-Nagumo and stochastic volatility models, achieving accurate parameter estimation in a short time.
APA
Ryder, T., Prangle, D., Golightly, A. & Matthews, I. (2021). The neural moving average model for scalable variational inference of state space models. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:12-22. Available from https://proceedings.mlr.press/v161/ryder21a.html.
