Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2100-2111, 2021.

Abstract

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF, necessary to obtain low-variance likelihood and state estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high-variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.
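To make the idea concrete, the kind of differentiable resampling the abstract describes can be sketched with entropy-regularized optimal transport: solve a Sinkhorn problem between the weighted particle cloud and a uniformly weighted one, then move each particle by the resulting transport plan (an "ensemble transform"). This is a minimal illustrative sketch, not the authors' implementation; the function name, `eps` (regularization strength), and `n_iter` are assumptions for illustration.

```python
import numpy as np

def sinkhorn_resample(particles, weights, eps=0.1, n_iter=100):
    """Hypothetical sketch of differentiable resampling via
    entropy-regularized OT: transport the weighted particle cloud
    onto a uniformly weighted cloud with the same support."""
    n = len(particles)
    # Squared-Euclidean cost between every pair of particles.
    diff = particles[:, None, :] - particles[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    K = np.exp(-cost / eps)           # Gibbs kernel
    u = np.ones(n)
    target = np.full(n, 1.0 / n)      # uniform target weights
    for _ in range(n_iter):           # Sinkhorn fixed-point iterations
        v = target / (K.T @ u)
        u = weights / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan; rows sum to `weights`
    # Ensemble transform: each resampled particle is a convex combination
    # of the originals, so gradients flow through weights and positions.
    return n * (P.T @ particles)
```

Because every resampled particle is a smooth function of the weights and positions, this step is differentiable end-to-end, unlike multinomial or systematic resampling; it also preserves the weighted mean of the cloud exactly.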

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-corenflos21a,
  title     = {Differentiable Particle Filtering via Entropy-Regularized Optimal Transport},
  author    = {Corenflos, Adrien and Thornton, James and Deligiannidis, George and Doucet, Arnaud},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2100--2111},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/corenflos21a/corenflos21a.pdf},
  url       = {https://proceedings.mlr.press/v139/corenflos21a.html},
  abstract  = {Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF, necessary to obtain low-variance likelihood and state estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high-variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.}
}
Endnote
%0 Conference Paper
%T Differentiable Particle Filtering via Entropy-Regularized Optimal Transport
%A Adrien Corenflos
%A James Thornton
%A George Deligiannidis
%A Arnaud Doucet
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-corenflos21a
%I PMLR
%P 2100--2111
%U https://proceedings.mlr.press/v139/corenflos21a.html
%V 139
%X Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models. Resampling is a key ingredient of PF, necessary to obtain low-variance likelihood and state estimates. However, traditional resampling methods result in PF-based loss functions being non-differentiable with respect to model and PF parameters. In a variational inference context, resampling also yields high-variance gradient estimates of the PF-based evidence lower bound. By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. We demonstrate this novel method on a variety of applications.
APA
Corenflos, A., Thornton, J., Deligiannidis, G. & Doucet, A. (2021). Differentiable Particle Filtering via Entropy-Regularized Optimal Transport. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2100-2111. Available from https://proceedings.mlr.press/v139/corenflos21a.html.
