Safe Offline Reinforcement Learning with Real-Time Budget Constraints

Qian Lin, Bo Tang, Zifan Wu, Chao Yu, Shangqin Mao, Qianlong Xie, Xingxing Wang, Dong Wang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:21127-21152, 2023.

Abstract

Aiming to promote the safe real-world deployment of Reinforcement Learning (RL), research on safe RL has made significant progress in recent years. However, most existing works in the literature still focus on the online setting, where risky violations of the safety budget are likely to be incurred during training. Moreover, in many real-world applications, the learned policy is required to respond to dynamically determined safety budgets (i.e., constraint thresholds) in real time. In this paper, we target this real-time budget constraint problem under the offline setting and propose Trajectory-based REal-time Budget Inference (TREBI), a novel solution that approaches the problem from the perspective of trajectory distribution. Theoretically, we prove an error bound on the estimation of episodic reward and cost under the offline setting, thus providing a performance guarantee for TREBI. Empirical results on a wide range of simulation tasks and a real-world large-scale advertising application demonstrate the capability of TREBI in solving real-time budget constraint problems under the offline setting.
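The setting the abstract describes can be read as a constrained MDP in which the cost threshold is only revealed at decision time. Below is a minimal LaTeX sketch of that objective; the notation ($r$, $c$, $b$, $T$, and the budget-conditioned policy $\pi(\cdot \mid b)$) is ours for illustration and is not taken from the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Episodic constrained-MDP objective with a safety budget b supplied in
% real time. Because b can differ from episode to episode, the policy
% must be conditioned on b rather than trained for one fixed threshold.
\[
  \max_{\pi(\cdot \mid b)} \;
  \mathbb{E}_{\tau \sim \pi(\cdot \mid b)}\!\left[ \sum_{t=0}^{T-1} r(s_t, a_t) \right]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\tau \sim \pi(\cdot \mid b)}\!\left[ \sum_{t=0}^{T-1} c(s_t, a_t) \right] \le b
\]
where $\tau = (s_0, a_0, \dots, s_{T-1}, a_{T-1})$ is a trajectory, $r$ is the
reward function, $c$ is the cost function, and $b$ is the real-time budget.
\end{document}

Viewing this optimization as acting over the induced distribution of whole trajectories, rather than over per-step policies, is one way a single model can accommodate an arbitrary budget $b$ at inference time, which appears to be the trajectory-distribution perspective the abstract alludes to.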

Cite this Paper
BibTeX
@InProceedings{pmlr-v202-lin23h,
  title     = {Safe Offline Reinforcement Learning with Real-Time Budget Constraints},
  author    = {Lin, Qian and Tang, Bo and Wu, Zifan and Yu, Chao and Mao, Shangqin and Xie, Qianlong and Wang, Xingxing and Wang, Dong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {21127--21152},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/lin23h/lin23h.pdf},
  url       = {https://proceedings.mlr.press/v202/lin23h.html},
  abstract  = {Aiming to promote the safe real-world deployment of Reinforcement Learning (RL), research on safe RL has made significant progress in recent years. However, most existing works in the literature still focus on the online setting, where risky violations of the safety budget are likely to be incurred during training. Moreover, in many real-world applications, the learned policy is required to respond to dynamically determined safety budgets (i.e., constraint thresholds) in real time. In this paper, we target this real-time budget constraint problem under the offline setting and propose Trajectory-based REal-time Budget Inference (TREBI), a novel solution that approaches the problem from the perspective of trajectory distribution. Theoretically, we prove an error bound on the estimation of episodic reward and cost under the offline setting, thus providing a performance guarantee for TREBI. Empirical results on a wide range of simulation tasks and a real-world large-scale advertising application demonstrate the capability of TREBI in solving real-time budget constraint problems under the offline setting.}
}
Endnote
%0 Conference Paper
%T Safe Offline Reinforcement Learning with Real-Time Budget Constraints
%A Qian Lin
%A Bo Tang
%A Zifan Wu
%A Chao Yu
%A Shangqin Mao
%A Qianlong Xie
%A Xingxing Wang
%A Dong Wang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-lin23h
%I PMLR
%P 21127--21152
%U https://proceedings.mlr.press/v202/lin23h.html
%V 202
%X Aiming to promote the safe real-world deployment of Reinforcement Learning (RL), research on safe RL has made significant progress in recent years. However, most existing works in the literature still focus on the online setting, where risky violations of the safety budget are likely to be incurred during training. Moreover, in many real-world applications, the learned policy is required to respond to dynamically determined safety budgets (i.e., constraint thresholds) in real time. In this paper, we target this real-time budget constraint problem under the offline setting and propose Trajectory-based REal-time Budget Inference (TREBI), a novel solution that approaches the problem from the perspective of trajectory distribution. Theoretically, we prove an error bound on the estimation of episodic reward and cost under the offline setting, thus providing a performance guarantee for TREBI. Empirical results on a wide range of simulation tasks and a real-world large-scale advertising application demonstrate the capability of TREBI in solving real-time budget constraint problems under the offline setting.
APA
Lin, Q., Tang, B., Wu, Z., Yu, C., Mao, S., Xie, Q., Wang, X. & Wang, D. (2023). Safe Offline Reinforcement Learning with Real-Time Budget Constraints. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:21127-21152. Available from https://proceedings.mlr.press/v202/lin23h.html.