Learning true objectives: Linear algebraic characterizations of identifiability in inverse reinforcement learning

Mohamad Louai Shehab, Antoine Aspeel, Nikos Arechiga, Andrew Best, Necmiye Ozay
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1266-1277, 2024.

Abstract

Inverse reinforcement learning (IRL) has emerged as a powerful paradigm for extracting expert skills from observed behavior, with applications ranging from autonomous systems to human-robot interaction. However, the identifiability issue within IRL poses a significant challenge, as multiple reward functions can explain the same observed behavior. This paper provides a linear algebraic characterization of several identifiability notions for an entropy-regularized finite-horizon Markov decision process (MDP). Moreover, our approach allows for the seamless integration of prior knowledge, in the form of featurized reward functions, to enhance the identifiability of IRL problems. The results are demonstrated with experiments on a grid world environment.
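The non-identifiability the abstract refers to can be made concrete. In the entropy-regularized finite-horizon setting, it is a standard fact that time-varying potential shaping of the reward, with the potential forced to zero at the horizon, leaves the soft-optimal policy unchanged, so the two rewards are indistinguishable from behavior alone. The following NumPy sketch illustrates this on a random MDP; all sizes and variable names are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 4, 3, 5                                   # states, actions, horizon (toy sizes)
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)                   # row-stochastic transition kernel
r = rng.random((S, A))                              # original reward

def soft_policies(rewards):
    """Entropy-regularized (soft) value iteration, backward in time.

    rewards: list of H arrays of shape (S, A); rewards[t] is used at step t.
    Returns the per-step soft-optimal policies pi_t(a|s).
    """
    V = np.zeros(S)                                 # terminal value V_H = 0
    pis = [None] * H
    for t in reversed(range(H)):
        Q = rewards[t] + P @ V                      # (S,A,S) @ (S,) -> (S,A)
        V = np.log(np.exp(Q).sum(axis=1))           # soft value: logsumexp over actions
        pis[t] = np.exp(Q - V[:, None])             # softmax policy
    return pis

base = soft_policies([r] * H)

# Potential shaping: r'_t(s,a) = r(s,a) + E[phi_{t+1}(s') | s,a] - phi_t(s),
# with phi_H = 0. This changes the reward but not the soft-optimal policy.
phi = rng.random((H + 1, S))
phi[H] = 0.0
shaped_r = [r + P @ phi[t + 1] - phi[t][:, None] for t in range(H)]
shaped = soft_policies(shaped_r)

print(all(np.allclose(a, b) for a, b in zip(base, shaped)))  # prints True
```

An induction argument confirms the equivalence: with `phi[H] = 0`, the shaped soft values satisfy V'_t = V_t - phi_t, so the shaped Q-values differ from the originals only by a per-state constant, which the softmax cancels.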

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-shehab24a,
  title     = {Learning true objectives: {L}inear algebraic characterizations of identifiability in inverse reinforcement learning},
  author    = {Shehab, Mohamad Louai and Aspeel, Antoine and Arechiga, Nikos and Best, Andrew and Ozay, Necmiye},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {1266--1277},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/shehab24a/shehab24a.pdf},
  url       = {https://proceedings.mlr.press/v242/shehab24a.html},
  abstract  = {Inverse reinforcement learning (IRL) has emerged as a powerful paradigm for extracting expert skills from observed behavior, with applications ranging from autonomous systems to human-robot interaction. However, the identifiability issue within IRL poses a significant challenge, as multiple reward functions can explain the same observed behavior. This paper provides a linear algebraic characterization of several identifiability notions for an entropy-regularized finite-horizon Markov decision process (MDP). Moreover, our approach allows for the seamless integration of prior knowledge, in the form of featurized reward functions, to enhance the identifiability of IRL problems. The results are demonstrated with experiments on a grid world environment.}
}
Endnote
%0 Conference Paper %T Learning true objectives: Linear algebraic characterizations of identifiability in inverse reinforcement learning %A Mohamad Louai Shehab %A Antoine Aspeel %A Nikos Arechiga %A Andrew Best %A Necmiye Ozay %B Proceedings of the 6th Annual Learning for Dynamics & Control Conference %C Proceedings of Machine Learning Research %D 2024 %E Alessandro Abate %E Mark Cannon %E Kostas Margellos %E Antonis Papachristodoulou %F pmlr-v242-shehab24a %I PMLR %P 1266--1277 %U https://proceedings.mlr.press/v242/shehab24a.html %V 242 %X Inverse reinforcement learning (IRL) has emerged as a powerful paradigm for extracting expert skills from observed behavior, with applications ranging from autonomous systems to human-robot interaction. However, the identifiability issue within IRL poses a significant challenge, as multiple reward functions can explain the same observed behavior. This paper provides a linear algebraic characterization of several identifiability notions for an entropy-regularized finite-horizon Markov decision process (MDP). Moreover, our approach allows for the seamless integration of prior knowledge, in the form of featurized reward functions, to enhance the identifiability of IRL problems. The results are demonstrated with experiments on a grid world environment.
APA
Shehab, M. L., Aspeel, A., Arechiga, N., Best, A., & Ozay, N. (2024). Learning true objectives: Linear algebraic characterizations of identifiability in inverse reinforcement learning. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:1266-1277. Available from https://proceedings.mlr.press/v242/shehab24a.html.