Deep Bayesian Filter for Bayes-Faithful Data Assimilation

Yuta Tarumi, Keisuke Fukuda, Shin-Ichi Maeda
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:59182-59209, 2025.

Abstract

Data assimilation for nonlinear state space models (SSMs) is inherently challenging due to non-Gaussian posteriors. We propose Deep Bayesian Filtering (DBF), a novel approach to data assimilation in nonlinear SSMs. DBF introduces latent variables $h_t$ in addition to physical variables $z_t$, ensuring Gaussian posteriors by (i) constraining state transitions in the latent space to be linear and (ii) learning a Gaussian inverse observation operator $r(h_t|o_t)$. This structured posterior design enables analytical recursive computation, avoiding the accumulation of Monte Carlo sampling errors over time steps. DBF optimizes these operators and other latent SSM parameters by maximizing the evidence lower bound. Experiments demonstrate that DBF outperforms existing methods in scenarios with highly non-Gaussian posteriors.
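The key structural idea (linear transitions in latent space plus a Gaussian inverse observation operator, so the filtering posterior stays Gaussian and can be computed in closed form) can be illustrated with a minimal sketch. This is not the paper's implementation; the matrices `A`, `Q` and the per-observation Gaussian `(mu_obs, Sigma_obs)` standing in for $r(h_t|o_t)$ are illustrative assumptions.

```python
import numpy as np

def predict(mu, Sigma, A, Q):
    # Linear latent transition keeps the predictive density Gaussian:
    # p(h_t | o_{1:t-1}) = N(A mu, A Sigma A^T + Q)
    return A @ mu, A @ Sigma @ A.T + Q

def update(mu_pred, Sigma_pred, mu_obs, Sigma_obs):
    # Fuse the Gaussian prediction with a Gaussian "inverse observation"
    # r(h_t | o_t) = N(mu_obs, Sigma_obs) by multiplying the two densities:
    # precisions add, and the posterior mean is the precision-weighted average.
    P_pred = np.linalg.inv(Sigma_pred)
    P_obs = np.linalg.inv(Sigma_obs)
    Sigma_post = np.linalg.inv(P_pred + P_obs)
    mu_post = Sigma_post @ (P_pred @ mu_pred + P_obs @ mu_obs)
    return mu_post, Sigma_post

# One filtering step in a 2-D latent space with toy parameters.
A, Q = np.eye(2), np.eye(2)
mu, Sigma = np.zeros(2), np.eye(2)
mu_pred, Sigma_pred = predict(mu, Sigma, A, Q)
mu_post, Sigma_post = update(mu_pred, Sigma_pred,
                             np.array([1.0, 1.0]), np.eye(2))
```

Because every quantity in the recursion is Gaussian, no Monte Carlo sampling is needed, so no sampling error accumulates across time steps; in DBF the parameters of the transition and of the learned inverse observation operator come from maximizing the evidence lower bound rather than being fixed as above.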

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-tarumi25a,
  title     = {Deep {B}ayesian Filter for {B}ayes-Faithful Data Assimilation},
  author    = {Tarumi, Yuta and Fukuda, Keisuke and Maeda, Shin-Ichi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {59182--59209},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/tarumi25a/tarumi25a.pdf},
  url       = {https://proceedings.mlr.press/v267/tarumi25a.html},
  abstract  = {Data assimilation for nonlinear state space models (SSMs) is inherently challenging due to non-Gaussian posteriors. We propose Deep Bayesian Filtering (DBF), a novel approach to data assimilation in nonlinear SSMs. DBF introduces latent variables $h_t$ in addition to physical variables $z_t$, ensuring Gaussian posteriors by (i) constraining state transitions in the latent space to be linear and (ii) learning a Gaussian inverse observation operator $r(h_t|o_t)$. This structured posterior design enables analytical recursive computation, avoiding the accumulation of Monte Carlo sampling errors over time steps. DBF optimizes these operators and other latent SSM parameters by maximizing the evidence lower bound. Experiments demonstrate that DBF outperforms existing methods in scenarios with highly non-Gaussian posteriors.}
}
Endnote
%0 Conference Paper
%T Deep Bayesian Filter for Bayes-Faithful Data Assimilation
%A Yuta Tarumi
%A Keisuke Fukuda
%A Shin-Ichi Maeda
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-tarumi25a
%I PMLR
%P 59182--59209
%U https://proceedings.mlr.press/v267/tarumi25a.html
%V 267
%X Data assimilation for nonlinear state space models (SSMs) is inherently challenging due to non-Gaussian posteriors. We propose Deep Bayesian Filtering (DBF), a novel approach to data assimilation in nonlinear SSMs. DBF introduces latent variables $h_t$ in addition to physical variables $z_t$, ensuring Gaussian posteriors by (i) constraining state transitions in the latent space to be linear and (ii) learning a Gaussian inverse observation operator $r(h_t|o_t)$. This structured posterior design enables analytical recursive computation, avoiding the accumulation of Monte Carlo sampling errors over time steps. DBF optimizes these operators and other latent SSM parameters by maximizing the evidence lower bound. Experiments demonstrate that DBF outperforms existing methods in scenarios with highly non-Gaussian posteriors.
APA
Tarumi, Y., Fukuda, K. & Maeda, S. (2025). Deep Bayesian Filter for Bayes-Faithful Data Assimilation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:59182-59209. Available from https://proceedings.mlr.press/v267/tarumi25a.html.