Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays

Qingyuan Wu, Simon Sinong Zhan, Yixuan Wang, Yuhui Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:53973-53998, 2024.

Abstract

Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degeneration in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms the SOTAs in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.
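To make the mechanism described in the abstract concrete, below is a minimal, hypothetical sketch of the auxiliary-delayed bootstrapping idea on a toy chain MDP: a Q-function for a short auxiliary delay is learned with ordinary Q-learning, and the Q-function for the long task delay bootstraps its TD targets from that short-delay value function. This is not the authors' released implementation (see the repository above); the environment, delay values, hyperparameters, and the hindsight reconstruction of augmented states are all illustrative assumptions.

```python
# Hypothetical sketch of auxiliary-delayed bootstrapping on a toy chain MDP.
# Everything below is an illustrative assumption, not the authors' code.
from collections import defaultdict
import random

N_STATES = 6                      # chain 0..5; reaching 5 ends the episode
ACTIONS = (0, 1)                  # 0 = left, 1 = right
GAMMA, ALPHA = 0.99, 0.1
D_LONG, D_SHORT = 4, 1            # long task delay vs. short auxiliary delay

def step(s, a):
    """Deterministic chain dynamics; reward 1 for reaching the right end."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == N_STATES - 1), s2 == N_STATES - 1

def aug(states, actions, t, d):
    """Augmented state under delay d: last observed state + actions since."""
    return (states[t - d], tuple(actions[t - d:t]))

q_short = defaultdict(float)      # auxiliary Q-function (delay D_SHORT)
q_long = defaultdict(float)       # task Q-function (delay D_LONG)

for _ in range(500):
    states, actions, rewards = [0], [], []
    done = False
    while not done and len(actions) < 60:
        a = random.choice(ACTIONS)          # uniform behavior policy, for brevity
        s2, r, done = step(states[-1], a)
        actions.append(a); rewards.append(r); states.append(s2)
    # In hindsight the full trajectory is known, so augmented states for
    # BOTH delays can be reconstructed and trained from the same data.
    for t in range(D_LONG, len(actions)):
        a, r = actions[t], rewards[t]
        terminal = done and t == len(actions) - 1
        if terminal:
            target = r
        else:
            # Both Q-functions bootstrap from the SHORT-delay value function:
            # the delayed-bootstrapping step sketched in the abstract.
            xs2 = aug(states, actions, t + 1, D_SHORT)
            target = r + GAMMA * max(q_short[(xs2, b)] for b in ACTIONS)
        xs = aug(states, actions, t, D_SHORT)
        xl = aug(states, actions, t, D_LONG)
        q_short[(xs, a)] += ALPHA * (target - q_short[(xs, a)])
        q_long[(xl, a)] += ALPHA * (target - q_long[(xl, a)])

# Sanity check: with last observation s=0 and four 'right' actions in flight,
# the greedy long-delay policy should keep moving right (action 1).
x0 = (0, (1,) * D_LONG)
print("greedy action:", max(ACTIONS, key=lambda b: q_long[(x0, b)]))
```

At execution time the agent acts greedily with respect to q_long on its long-delay augmented state; the point of the construction is that backups for the long-delay task run through the much smaller short-delay value function rather than through the exploded long-delay state space, which is the intuition behind the paper's sample-complexity claim.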

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-wu24af,
  title     = {Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays},
  author    = {Wu, Qingyuan and Zhan, Simon Sinong and Wang, Yixuan and Wang, Yuhui and Lin, Chung-Wei and Lv, Chen and Zhu, Qi and Schmidhuber, J\"{u}rgen and Huang, Chao},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {53973--53998},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wu24af/wu24af.pdf},
  url       = {https://proceedings.mlr.press/v235/wu24af.html},
  abstract  = {Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degeneration in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms the SOTAs in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.}
}
Endnote
%0 Conference Paper
%T Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays
%A Qingyuan Wu
%A Simon Sinong Zhan
%A Yixuan Wang
%A Yuhui Wang
%A Chung-Wei Lin
%A Chen Lv
%A Qi Zhu
%A Jürgen Schmidhuber
%A Chao Huang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wu24af
%I PMLR
%P 53973--53998
%U https://proceedings.mlr.press/v235/wu24af.html
%V 235
%X Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degeneration in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms the SOTAs in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.
APA
Wu, Q., Zhan, S.S., Wang, Y., Wang, Y., Lin, C.-W., Lv, C., Zhu, Q., Schmidhuber, J. & Huang, C. (2024). Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:53973-53998. Available from https://proceedings.mlr.press/v235/wu24af.html.
