Variational Marginal Particle Filters

Jinlin Lai, Justin Domke, Daniel Sheldon
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:875-895, 2022.

Abstract

Variational inference for state space models (SSMs) is known to be hard in general. Recent works focus on deriving variational objectives for SSMs from unbiased sequential Monte Carlo estimators. We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced variance and differentiability. We propose the variational marginal particle filter (VMPF), which is a differentiable and reparameterizable variational filtering objective for SSMs based on an unbiased estimator. We find that VMPF with biased gradients gives tighter bounds than previous objectives, and the unbiased reparameterization gradients are sometimes beneficial.
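The objectives the abstract refers to build on a standard fact: the log of an unbiased sequential Monte Carlo estimate of the marginal likelihood is, in expectation, a lower bound on log p(y_{1:T}) (by Jensen's inequality), so it can be maximized as a variational objective. As an illustrative sketch only (not the paper's VMPF), here is a bootstrap particle filter computing such a log-evidence estimate for a hypothetical 1-D linear-Gaussian SSM; all parameter names and model choices are assumptions for the example:

```python
import numpy as np

def log_evidence_smc(ys, n_particles=200, phi=0.9, q=1.0, r=1.0, seed=0):
    """Log of the unbiased SMC estimate of p(y_{1:T}) for the toy model
    x_t = phi * x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    rng = np.random.default_rng(seed)
    # Initialize particles from the stationary prior of the AR(1) state.
    x = rng.normal(0.0, np.sqrt(q / (1 - phi**2)), size=n_particles)
    log_Z = 0.0
    for y in ys:
        # Weight each particle by the Gaussian observation likelihood p(y_t | x_t).
        log_w = -0.5 * ((y - x) ** 2 / r + np.log(2 * np.pi * r))
        m = log_w.max()
        w = np.exp(log_w - m)
        # Accumulate the log of the unbiased per-step evidence estimate.
        log_Z += m + np.log(w.mean())
        # Multinomial resampling, then propagate through the transition model.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = rng.normal(phi * x[idx], np.sqrt(q))
    return log_Z
```

Averaging this quantity over seeds gives a stochastic lower bound on the true log-evidence; the resampling step is what makes naive reparameterization gradients of such objectives problematic, which motivates the marginal-particle-filter view taken in the paper.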

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-lai22a,
  title     = {Variational Marginal Particle Filters},
  author    = {Lai, Jinlin and Domke, Justin and Sheldon, Daniel},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {875--895},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lai22a/lai22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lai22a.html},
  abstract  = {Variational inference for state space models (SSMs) is known to be hard in general. Recent works focus on deriving variational objectives for SSMs from unbiased sequential Monte Carlo estimators. We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced variance and differentiability. We propose the variational marginal particle filter (VMPF), which is a differentiable and reparameterizable variational filtering objective for SSMs based on an unbiased estimator. We find that VMPF with biased gradients gives tighter bounds than previous objectives, and the unbiased reparameterization gradients are sometimes beneficial.}
}
Endnote
%0 Conference Paper
%T Variational Marginal Particle Filters
%A Jinlin Lai
%A Justin Domke
%A Daniel Sheldon
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lai22a
%I PMLR
%P 875--895
%U https://proceedings.mlr.press/v151/lai22a.html
%V 151
%X Variational inference for state space models (SSMs) is known to be hard in general. Recent works focus on deriving variational objectives for SSMs from unbiased sequential Monte Carlo estimators. We reveal that the marginal particle filter is obtained from sequential Monte Carlo by applying Rao-Blackwellization operations, which sacrifices the trajectory information for reduced variance and differentiability. We propose the variational marginal particle filter (VMPF), which is a differentiable and reparameterizable variational filtering objective for SSMs based on an unbiased estimator. We find that VMPF with biased gradients gives tighter bounds than previous objectives, and the unbiased reparameterization gradients are sometimes beneficial.
APA
Lai, J., Domke, J., & Sheldon, D. (2022). Variational Marginal Particle Filters. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:875-895. Available from https://proceedings.mlr.press/v151/lai22a.html.