Backward Filtering Forward Deciding in Linear Non-Gaussian State Space Models

Yun-Peng Li, Hans-Andrea Loeliger
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2287-2295, 2024.

Abstract

The paper considers linear state space models with non-Gaussian inputs and/or constraints. As shown previously, NUP representations (normal with unknown parameters) make it possible to compute MAP estimates in such models by iterating Kalman smoothing recursions. In this paper, we propose to compute such MAP estimates by iterating backward-forward recursions, where the forward recursion amounts to coordinatewise input estimation. The advantages of the proposed approach include faster convergence, no “zero-variance stucking”, and easier control of constraint satisfaction. The approach is demonstrated with simulation results for exemplary applications including (i) regression with non-Gaussian priors or constraints on k-th order differences and (ii) control with linearly constrained inputs.
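For intuition, the sketch below (not the authors' code) illustrates the baseline that the paper builds on: MAP input estimation under a Laplace (sparse) input prior, with the NUP idea of iteratively re-estimated input variances. For brevity, the Kalman smoothing step is replaced by its batch least-squares equivalent; the model matrices, horizon, noise level, and Laplace scale `beta` are illustrative assumptions. Note how the variance update can shrink toward zero and pin an input estimate there, which is the “zero-variance stucking” that the proposed backward filtering forward deciding scheme avoids.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# MAP estimation of sparse inputs in a linear state space model using the NUP
# (normal with unknown parameters) representation of a Laplace prior.
# The Kalman smoothing step is replaced by its batch least-squares equivalent.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_{k+1} = A x_k + b u_k,  y_k = c x_k + noise  (double integrator)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([0.5, 1.0])
c = np.array([1.0, 0.0])
N, sigma_y, beta = 50, 0.1, 5.0          # horizon, noise std, Laplace scale

# Sparse ground-truth input and simulated observations (zero initial state)
u_true = np.zeros(N)
u_true[[10, 30]] = [1.0, -1.5]
x = np.zeros(2)
y = np.zeros(N)
for k in range(N):
    y[k] = c @ x + sigma_y * rng.standard_normal()
    x = A @ x + b * u_true[k]

# Batch map from inputs to outputs: y = H u + noise, with H[k, j] = c A^{k-1-j} b
H = np.zeros((N, N))
for k in range(N):
    for j in range(k):
        H[k, j] = c @ np.linalg.matrix_power(A, k - 1 - j) @ b

# Iterate: (1) Gaussian MAP with the current input variances s,
#          (2) NUP update of s (quadratic majorizer of the Laplace penalty).
s = np.ones(N)
for it in range(100):
    W = np.diag(1.0 / np.maximum(s, 1e-12))
    u_hat = np.linalg.solve(H.T @ H / sigma_y**2 + W, H.T @ y / sigma_y**2)
    s = np.abs(u_hat) / beta + 1e-12     # tight at s_k = |u_k| / beta

print("estimated nonzero inputs at indices:", np.nonzero(np.abs(u_hat) > 0.05)[0])
```

In this naive scheme, once an estimate û_k reaches (numerically) zero, its variance s_k collapses and the estimate stays at zero in all later iterations; the backward-forward recursions proposed in the paper are reported to avoid this behavior and to converge faster.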

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-li24j,
  title     = {Backward Filtering Forward Deciding in Linear Non-{G}aussian State Space Models},
  author    = {Li, Yun-Peng and Loeliger, Hans-Andrea},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2287--2295},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/li24j/li24j.pdf},
  url       = {https://proceedings.mlr.press/v238/li24j.html},
  abstract  = {The paper considers linear state space models with non-Gaussian inputs and/or constraints. As shown previously, NUP representations (normal with unknown parameters) allow to compute MAP estimates in such models by iterating Kalman smoothing recursions. In this paper, we propose to compute such MAP estimates by iterating backward-forward recursions where the forward recursion amounts to coordinatewise input estimation. The advantages of the proposed approach include faster convergence, no “zero-variance stucking”, and easier control of constraint satisfaction. The approach is demonstrated with simulation results of exemplary applications including (i) regression with non-Gaussian priors or constraints on k-th order differences and (ii) control with linearly constrained inputs.}
}
Endnote
%0 Conference Paper
%T Backward Filtering Forward Deciding in Linear Non-Gaussian State Space Models
%A Yun-Peng Li
%A Hans-Andrea Loeliger
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-li24j
%I PMLR
%P 2287--2295
%U https://proceedings.mlr.press/v238/li24j.html
%V 238
%X The paper considers linear state space models with non-Gaussian inputs and/or constraints. As shown previously, NUP representations (normal with unknown parameters) allow to compute MAP estimates in such models by iterating Kalman smoothing recursions. In this paper, we propose to compute such MAP estimates by iterating backward-forward recursions where the forward recursion amounts to coordinatewise input estimation. The advantages of the proposed approach include faster convergence, no “zero-variance stucking”, and easier control of constraint satisfaction. The approach is demonstrated with simulation results of exemplary applications including (i) regression with non-Gaussian priors or constraints on k-th order differences and (ii) control with linearly constrained inputs.
APA
Li, Y. & Loeliger, H. (2024). Backward Filtering Forward Deciding in Linear Non-Gaussian State Space Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2287-2295. Available from https://proceedings.mlr.press/v238/li24j.html.
