Directly Forecasting Belief for Reinforcement Learning with Delays

Qingyuan Wu, Yuhui Wang, Simon Sinong Zhan, Yixuan Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:67810-67832, 2025.

Abstract

Reinforcement learning (RL) with delays is challenging as sensory perceptions lag behind the actual events: the RL agent needs to estimate the real state of its environment based on past observations. State-of-the-art (SOTA) methods typically employ recursive, step-by-step forecasting of states, which can cause compounding errors to accumulate. To tackle this problem, our novel belief estimation method, named Directly Forecasting Belief Transformer (DFBT), directly forecasts states from observations without incrementally estimating intermediate states step-by-step. We theoretically demonstrate that DFBT greatly reduces the compounding errors of existing recursive forecasting methods, yielding stronger performance guarantees. In experiments with D4RL offline datasets, DFBT reduces compounding errors with remarkable prediction accuracy. DFBT’s capability to forecast state sequences also facilitates multi-step bootstrapping, thus greatly improving learning efficiency. On the MuJoCo benchmark, our DFBT-based method substantially outperforms SOTA baselines. Code is available at https://github.com/QingyuanWuNothing/DFBT.
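
To make the contrast in the abstract concrete, the following is a minimal Python sketch (not from the paper's codebase; names such as one_step_model, recursive_forecast, direct_forecast, and the weight matrix W are illustrative placeholders) of recursive versus direct belief forecasting under an observation delay:

    import numpy as np

    rng = np.random.default_rng(0)
    delay = 5           # number of steps the observation lags behind the true state
    state_dim = 3

    def one_step_model(s, a):
        # Hypothetical learned one-step dynamics model predicting s_{t+1}
        # from (s_t, a_t); a noisy linear map stands in for a trained network.
        return 0.9 * s + 0.1 * a + rng.normal(scale=0.05, size=s.shape)

    def recursive_forecast(s_obs, actions):
        # Recursive (SOTA-style) belief estimation: roll the one-step model
        # forward, feeding each prediction back in as the next input. Each
        # step's error enters every subsequent prediction, so the error
        # compounds over the length of the delay.
        s = s_obs
        for a in actions:
            s = one_step_model(s, a)
        return s

    def direct_forecast(s_obs, actions, W):
        # Direct (DFBT-style) belief estimation, schematically: a single
        # forward pass maps the delayed observation plus the pending action
        # sequence straight to an estimate of the current state, with no
        # intermediate rollout. The paper uses a transformer here; a single
        # linear map W stands in for it.
        x = np.concatenate([s_obs, actions.ravel()])
        return W @ x

    s_obs = rng.normal(size=state_dim)               # last (delayed) observation
    actions = rng.normal(size=(delay, state_dim))    # actions taken since then
    W = rng.normal(scale=0.1, size=(state_dim, state_dim + delay * state_dim))

    belief_recursive = recursive_forecast(s_obs, actions)
    belief_direct = direct_forecast(s_obs, actions, W)
    print(belief_recursive, belief_direct)

In the actual method the direct model forecasts the whole intermediate state sequence in one pass rather than only the final state; per the abstract, those forecasted sequences are what enable multi-step bootstrapping targets and the resulting gains in learning efficiency.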

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wu25ag,
  title     = {Directly Forecasting Belief for Reinforcement Learning with Delays},
  author    = {Wu, Qingyuan and Wang, Yuhui and Zhan, Simon Sinong and Wang, Yixuan and Lin, Chung-Wei and Lv, Chen and Zhu, Qi and Schmidhuber, J\"{u}rgen and Huang, Chao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {67810--67832},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wu25ag/wu25ag.pdf},
  url       = {https://proceedings.mlr.press/v267/wu25ag.html},
  abstract  = {Reinforcement learning (RL) with delays is challenging as sensory perceptions lag behind the actual events: the RL agent needs to estimate the real state of its environment based on past observations. State-of-the-art (SOTA) methods typically employ recursive, step-by-step forecasting of states, which can cause compounding errors to accumulate. To tackle this problem, our novel belief estimation method, named Directly Forecasting Belief Transformer (DFBT), directly forecasts states from observations without incrementally estimating intermediate states step-by-step. We theoretically demonstrate that DFBT greatly reduces the compounding errors of existing recursive forecasting methods, yielding stronger performance guarantees. In experiments with D4RL offline datasets, DFBT reduces compounding errors with remarkable prediction accuracy. DFBT’s capability to forecast state sequences also facilitates multi-step bootstrapping, thus greatly improving learning efficiency. On the MuJoCo benchmark, our DFBT-based method substantially outperforms SOTA baselines. Code is available at https://github.com/QingyuanWuNothing/DFBT.}
}
Endnote
%0 Conference Paper
%T Directly Forecasting Belief for Reinforcement Learning with Delays
%A Qingyuan Wu
%A Yuhui Wang
%A Simon Sinong Zhan
%A Yixuan Wang
%A Chung-Wei Lin
%A Chen Lv
%A Qi Zhu
%A Jürgen Schmidhuber
%A Chao Huang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wu25ag
%I PMLR
%P 67810--67832
%U https://proceedings.mlr.press/v267/wu25ag.html
%V 267
%X Reinforcement learning (RL) with delays is challenging as sensory perceptions lag behind the actual events: the RL agent needs to estimate the real state of its environment based on past observations. State-of-the-art (SOTA) methods typically employ recursive, step-by-step forecasting of states, which can cause compounding errors to accumulate. To tackle this problem, our novel belief estimation method, named Directly Forecasting Belief Transformer (DFBT), directly forecasts states from observations without incrementally estimating intermediate states step-by-step. We theoretically demonstrate that DFBT greatly reduces the compounding errors of existing recursive forecasting methods, yielding stronger performance guarantees. In experiments with D4RL offline datasets, DFBT reduces compounding errors with remarkable prediction accuracy. DFBT’s capability to forecast state sequences also facilitates multi-step bootstrapping, thus greatly improving learning efficiency. On the MuJoCo benchmark, our DFBT-based method substantially outperforms SOTA baselines. Code is available at https://github.com/QingyuanWuNothing/DFBT.
APA
Wu, Q., Wang, Y., Zhan, S. S., Wang, Y., Lin, C.-W., Lv, C., Zhu, Q., Schmidhuber, J., & Huang, C. (2025). Directly Forecasting Belief for Reinforcement Learning with Delays. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:67810-67832. Available from https://proceedings.mlr.press/v267/wu25ag.html.