Bilevel Reinforcement Learning via the Development of Hyper-gradient without Lower-Level Convexity

Yan Yang, Bin Gao, Ya-xiang Yuan
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4780-4788, 2025.

Abstract

Bilevel reinforcement learning (RL), which features an intertwined two-level problem structure, has attracted growing interest recently. The inherent non-convexity of the lower-level RL problem is, however, an impediment to developing bilevel optimization methods. By employing the fixed-point equation associated with regularized RL, we characterize the hyper-gradient via fully first-order information, thus circumventing the assumption of lower-level convexity. Remarkably, this distinguishes our development of the hyper-gradient from general AID-based bilevel frameworks, since we take advantage of the specific structure of RL problems. Moreover, we design both model-based and model-free bilevel reinforcement learning algorithms, facilitated by access to the fully first-order hyper-gradient. Both algorithms enjoy the convergence rate $\mathcal{O}\left(\epsilon^{-1}\right)$. To extend applicability, a stochastic version of the model-free algorithm is proposed, along with results on its convergence rate and sampling complexity. In addition, numerical experiments demonstrate that the hyper-gradient indeed serves as an integration of exploitation and exploration.
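
To make the abstract's key idea concrete, here is a minimal sketch in illustrative notation (the symbols below are ours, for exposition, and not necessarily the paper's). The bilevel RL problem can be written as

$$
\min_{x}\; \Phi(x) := F\big(x, \pi^*(x)\big)
\quad \text{s.t.} \quad
\pi^*(x) = \arg\max_{\pi}\; J_\tau(x, \pi),
$$

where $J_\tau$ denotes a regularized lower-level RL objective with regularization strength $\tau > 0$. Regularization makes the lower-level solution the unique fixed point of a soft Bellman (softmax) operator, $\pi^*(x) = T\big(x, \pi^*(x)\big)$. Differentiating this fixed-point equation, rather than the optimality conditions of $J_\tau$, gives

$$
\nabla_x \pi^*(x) = \big(I - \nabla_\pi T\big)^{-1} \nabla_x T,
\qquad
\nabla \Phi(x) = \nabla_x F + \big(\nabla_x \pi^*(x)\big)^{\top} \nabla_\pi F,
$$

where $I - \nabla_\pi T$ is invertible whenever $T$ is a contraction in $\pi$, which regularization provides. The resulting hyper-gradient involves only first-order derivatives of $T$ and $F$; no Hessian of the lower-level objective appears, which is why lower-level convexity is not required.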

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-yang25g,
  title     = {Bilevel Reinforcement Learning via the Development of Hyper-gradient without Lower-Level Convexity},
  author    = {Yang, Yan and Gao, Bin and Yuan, Ya-xiang},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4780--4788},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/yang25g/yang25g.pdf},
  url       = {https://proceedings.mlr.press/v258/yang25g.html},
  abstract  = {Bilevel reinforcement learning (RL), which features an intertwined two-level problem structure, has attracted growing interest recently. The inherent non-convexity of the lower-level RL problem is, however, an impediment to developing bilevel optimization methods. By employing the fixed-point equation associated with regularized RL, we characterize the hyper-gradient via fully first-order information, thus circumventing the assumption of lower-level convexity. Remarkably, this distinguishes our development of the hyper-gradient from general AID-based bilevel frameworks, since we take advantage of the specific structure of RL problems. Moreover, we design both model-based and model-free bilevel reinforcement learning algorithms, facilitated by access to the fully first-order hyper-gradient. Both algorithms enjoy the convergence rate $\mathcal{O}\left(\epsilon^{-1}\right)$. To extend applicability, a stochastic version of the model-free algorithm is proposed, along with results on its convergence rate and sampling complexity. In addition, numerical experiments demonstrate that the hyper-gradient indeed serves as an integration of exploitation and exploration.}
}
Endnote
%0 Conference Paper
%T Bilevel Reinforcement Learning via the Development of Hyper-gradient without Lower-Level Convexity
%A Yan Yang
%A Bin Gao
%A Ya-xiang Yuan
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-yang25g
%I PMLR
%P 4780--4788
%U https://proceedings.mlr.press/v258/yang25g.html
%V 258
%X Bilevel reinforcement learning (RL), which features an intertwined two-level problem structure, has attracted growing interest recently. The inherent non-convexity of the lower-level RL problem is, however, an impediment to developing bilevel optimization methods. By employing the fixed-point equation associated with regularized RL, we characterize the hyper-gradient via fully first-order information, thus circumventing the assumption of lower-level convexity. Remarkably, this distinguishes our development of the hyper-gradient from general AID-based bilevel frameworks, since we take advantage of the specific structure of RL problems. Moreover, we design both model-based and model-free bilevel reinforcement learning algorithms, facilitated by access to the fully first-order hyper-gradient. Both algorithms enjoy the convergence rate $\mathcal{O}\left(\epsilon^{-1}\right)$. To extend applicability, a stochastic version of the model-free algorithm is proposed, along with results on its convergence rate and sampling complexity. In addition, numerical experiments demonstrate that the hyper-gradient indeed serves as an integration of exploitation and exploration.
APA
Yang, Y., Gao, B. & Yuan, Y. (2025). Bilevel Reinforcement Learning via the Development of Hyper-gradient without Lower-Level Convexity. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4780-4788. Available from https://proceedings.mlr.press/v258/yang25g.html.
