Efficient Duple Perturbation Robustness in Low-rank MDPs

Yang Hu, Haitong Ma, Na Li, Bo Dai
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:723-737, 2025.

Abstract

The pursuit of robustness has recently been a popular topic in reinforcement learning (RL) research, yet existing methods generally suffer from computational issues that obstruct their real-world implementation. In this paper, we consider MDPs with low-rank structure, where the transition kernel can be written as a linear product of a feature map and factors. We introduce *duple perturbation* robustness, i.e., perturbations on both the feature map and the factors, via a novel characterization of $(\xi,\eta)$-ambiguity sets that admits computational efficiency. Our low-rank robust MDP formulation is compatible with the low-rank function-representation view, and is therefore naturally applicable to practical RL problems with large or even continuous state-action spaces. It also gives rise to a provably efficient and practical algorithm with a theoretical convergence-rate guarantee. Lastly, the robustness of the proposed approach is demonstrated by numerical experiments, including classical control tasks with continuous state-action spaces.
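For context, the low-rank factorization referenced in the abstract is conventionally written as follows (generic notation commonly used for low-rank MDPs; the symbols $\phi$, $\mu$, and rank $d$ follow the usual convention and may differ from the paper's exact notation):

```latex
% Low-rank MDP: the transition kernel factorizes through a rank-d feature map
P(s' \mid s, a) \;=\; \big\langle \phi(s, a),\, \mu(s') \big\rangle
                \;=\; \sum_{i=1}^{d} \phi_i(s, a)\, \mu_i(s'),
```

where $\phi(s,a) \in \mathbb{R}^d$ is the feature map and $\mu(\cdot) \in \mathbb{R}^d$ collects the factors. Duple perturbation robustness, as described above, considers ambiguity in both $\phi$ and $\mu$ simultaneously, rather than perturbing the transition kernel $P$ as a whole.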

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-hu25b,
  title     = {Efficient Duple Perturbation Robustness in Low-rank MDPs},
  author    = {Hu, Yang and Ma, Haitong and Li, Na and Dai, Bo},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {723--737},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/hu25b/hu25b.pdf},
  url       = {https://proceedings.mlr.press/v283/hu25b.html},
  abstract  = {The pursuit of robustness has recently been a popular topic in reinforcement learning (RL) research, yet the existing methods generally suffer from computation issues that obstruct their real-world implementation. In this paper, we consider MDPs with low-rank structures, where the transition kernel can be written as a linear product of feature map and factors. We introduce *duple perturbation* robustness, i.e. perturbation on both the feature map and the factors, via a novel characterization of $(\xi,\eta)$-ambiguity sets featuring computational efficiency. Our novel low-rank robust MDP formulation is compatible with the low-rank function representation view, and therefore, is naturally applicable to practical RL problems with large or even continuous state-action spaces. Meanwhile, it also gives rise to a provably efficient and practical algorithm with theoretical convergence rate guarantee. Lastly, the robustness of our proposed approach is justified by numerical experiments, including classical control tasks with continuous state-action spaces.}
}
Endnote
%0 Conference Paper
%T Efficient Duple Perturbation Robustness in Low-rank MDPs
%A Yang Hu
%A Haitong Ma
%A Na Li
%A Bo Dai
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-hu25b
%I PMLR
%P 723--737
%U https://proceedings.mlr.press/v283/hu25b.html
%V 283
%X The pursuit of robustness has recently been a popular topic in reinforcement learning (RL) research, yet the existing methods generally suffer from computation issues that obstruct their real-world implementation. In this paper, we consider MDPs with low-rank structures, where the transition kernel can be written as a linear product of feature map and factors. We introduce *duple perturbation* robustness, i.e. perturbation on both the feature map and the factors, via a novel characterization of $(\xi,\eta)$-ambiguity sets featuring computational efficiency. Our novel low-rank robust MDP formulation is compatible with the low-rank function representation view, and therefore, is naturally applicable to practical RL problems with large or even continuous state-action spaces. Meanwhile, it also gives rise to a provably efficient and practical algorithm with theoretical convergence rate guarantee. Lastly, the robustness of our proposed approach is justified by numerical experiments, including classical control tasks with continuous state-action spaces.
APA
Hu, Y., Ma, H., Li, N., & Dai, B. (2025). Efficient Duple Perturbation Robustness in Low-rank MDPs. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:723-737. Available from https://proceedings.mlr.press/v283/hu25b.html.