Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective

Lei Zhao, Mengdi Wang, Yu Bai
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:60957-61020, 2024.

Abstract

Inverse Reinforcement Learning (IRL)—the problem of learning reward functions from demonstrations of an expert policy—plays a critical role in developing intelligent systems. While widely used in applications, theoretical understandings of IRL present unique challenges and remain less developed compared with standard RL. For example, it remains open how to do IRL efficiently in standard offline settings with pre-collected data, where states are obtained from a behavior policy (which could be the expert policy itself), and actions are sampled from the expert policy. This paper provides the first line of results for efficient IRL in vanilla offline and online settings using polynomial samples and runtime. Our algorithms and analyses seamlessly adapt the pessimism principle commonly used in offline RL, and achieve IRL guarantees in stronger metrics than considered in existing work. We provide lower bounds showing that our sample complexities are nearly optimal. As an application, we also show that the learned rewards can transfer to another target MDP with suitable guarantees when the target MDP satisfies certain similarity assumptions with the original (source) MDP.
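To make the offline setting described above concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of the data-collection process the abstract refers to: trajectories roll out under a behavior policy, and the dataset records, at each visited state, an action sampled from the expert policy. All names and sizes (n_states, behavior_policy, expert_policy, and so on) are illustrative assumptions in a toy tabular MDP.

    # Sketch of offline IRL data collection in a toy tabular MDP (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, horizon, n_episodes = 5, 3, 4, 100

    # Unknown transition kernel P[h, s, a] = distribution over next states.
    P = rng.dirichlet(np.ones(n_states), size=(horizon, n_states, n_actions))

    # The behavior policy determines which states are visited;
    # the expert policy labels the actions recorded in the dataset.
    behavior_policy = rng.dirichlet(np.ones(n_actions), size=(horizon, n_states))
    expert_policy = rng.dirichlet(np.ones(n_actions), size=(horizon, n_states))

    dataset = []  # pre-collected demonstrations: (step, state, expert action)
    for _ in range(n_episodes):
        s = rng.integers(n_states)  # initial state
        for h in range(horizon):
            a_expert = rng.choice(n_actions, p=expert_policy[h, s])
            dataset.append((h, s, a_expert))
            # The trajectory itself advances under the behavior policy's action.
            a_behavior = rng.choice(n_actions, p=behavior_policy[h, s])
            s = rng.choice(n_states, p=P[h, s, a_behavior])

Setting behavior_policy equal to expert_policy recovers the special case noted in the abstract, where the demonstrations come entirely from the expert.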

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhao24m,
  title     = {Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? {A} Theoretical Perspective},
  author    = {Zhao, Lei and Wang, Mengdi and Bai, Yu},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {60957--61020},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24m/zhao24m.pdf},
  url       = {https://proceedings.mlr.press/v235/zhao24m.html},
  abstract  = {Inverse Reinforcement Learning (IRL)—the problem of learning reward functions from demonstrations of an expert policy—plays a critical role in developing intelligent systems. While widely used in applications, theoretical understandings of IRL present unique challenges and remain less developed compared with standard RL. For example, it remains open how to do IRL efficiently in standard offline settings with pre-collected data, where states are obtained from a behavior policy (which could be the expert policy itself), and actions are sampled from the expert policy. This paper provides the first line of results for efficient IRL in vanilla offline and online settings using polynomial samples and runtime. Our algorithms and analyses seamlessly adapt the pessimism principle commonly used in offline RL, and achieve IRL guarantees in stronger metrics than considered in existing work. We provide lower bounds showing that our sample complexities are nearly optimal. As an application, we also show that the learned rewards can transfer to another target MDP with suitable guarantees when the target MDP satisfies certain similarity assumptions with the original (source) MDP.}
}
Endnote
%0 Conference Paper
%T Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
%A Lei Zhao
%A Mengdi Wang
%A Yu Bai
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhao24m
%I PMLR
%P 60957--61020
%U https://proceedings.mlr.press/v235/zhao24m.html
%V 235
%X Inverse Reinforcement Learning (IRL)—the problem of learning reward functions from demonstrations of an expert policy—plays a critical role in developing intelligent systems. While widely used in applications, theoretical understandings of IRL present unique challenges and remain less developed compared with standard RL. For example, it remains open how to do IRL efficiently in standard offline settings with pre-collected data, where states are obtained from a behavior policy (which could be the expert policy itself), and actions are sampled from the expert policy. This paper provides the first line of results for efficient IRL in vanilla offline and online settings using polynomial samples and runtime. Our algorithms and analyses seamlessly adapt the pessimism principle commonly used in offline RL, and achieve IRL guarantees in stronger metrics than considered in existing work. We provide lower bounds showing that our sample complexities are nearly optimal. As an application, we also show that the learned rewards can transfer to another target MDP with suitable guarantees when the target MDP satisfies certain similarity assumptions with the original (source) MDP.
APA
Zhao, L., Wang, M., & Bai, Y. (2024). Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:60957-61020. Available from https://proceedings.mlr.press/v235/zhao24m.html.