Imitation Learning in Discounted Linear MDPs without exploration assumptions

Luca Viano, Stratis Skoulakis, Volkan Cevher
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:49471-49505, 2024.

Abstract

We present a new algorithm for imitation learning in infinite-horizon linear MDPs, dubbed ILARL, which greatly improves the bound on the number of trajectories that the learner needs to sample from the environment. In particular, we remove the exploration assumptions required in previous works and improve the dependence on the desired accuracy $\epsilon$ from $\mathcal{O}(\epsilon^{-5})$ to $\mathcal{O}(\epsilon^{-4})$. Our result relies on a connection between imitation learning and online learning in MDPs with adversarial losses. For the latter setting, we present the first result for infinite-horizon linear MDPs, which may be of independent interest. Moreover, we provide a strengthened result for the finite-horizon case, where we achieve $\mathcal{O}(\epsilon^{-2})$. Numerical experiments with linear function approximation show that ILARL outperforms other commonly used algorithms.
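For quick reference, the sample-complexity claims above can be collected into a single display. Here $N(\epsilon)$ is an illustrative symbol (not notation taken from the paper) for the number of trajectories the learner must sample to reach accuracy $\epsilon$:

$$
\underbrace{N(\epsilon)=\mathcal{O}(\epsilon^{-5})}_{\text{prior work (with exploration assumptions)}}
\;\longrightarrow\;
\underbrace{N(\epsilon)=\mathcal{O}(\epsilon^{-4})}_{\text{ILARL (infinite-horizon discounted)}},
\qquad
\underbrace{N(\epsilon)=\mathcal{O}(\epsilon^{-2})}_{\text{finite-horizon case}}.
$$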

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-viano24a,
  title     = {Imitation Learning in Discounted Linear {MDP}s without exploration assumptions},
  author    = {Viano, Luca and Skoulakis, Stratis and Cevher, Volkan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {49471--49505},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/viano24a/viano24a.pdf},
  url       = {https://proceedings.mlr.press/v235/viano24a.html},
  abstract  = {We present a new algorithm for imitation learning in infinite-horizon linear MDPs, dubbed ILARL, which greatly improves the bound on the number of trajectories that the learner needs to sample from the environment. In particular, we remove the exploration assumptions required in previous works and improve the dependence on the desired accuracy $\epsilon$ from $\mathcal{O}(\epsilon^{-5})$ to $\mathcal{O}(\epsilon^{-4})$. Our result relies on a connection between imitation learning and online learning in MDPs with adversarial losses. For the latter setting, we present the first result for infinite-horizon linear MDPs, which may be of independent interest. Moreover, we provide a strengthened result for the finite-horizon case, where we achieve $\mathcal{O}(\epsilon^{-2})$. Numerical experiments with linear function approximation show that ILARL outperforms other commonly used algorithms.}
}
Endnote
%0 Conference Paper
%T Imitation Learning in Discounted Linear MDPs without exploration assumptions
%A Luca Viano
%A Stratis Skoulakis
%A Volkan Cevher
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-viano24a
%I PMLR
%P 49471--49505
%U https://proceedings.mlr.press/v235/viano24a.html
%V 235
%X We present a new algorithm for imitation learning in infinite-horizon linear MDPs, dubbed ILARL, which greatly improves the bound on the number of trajectories that the learner needs to sample from the environment. In particular, we remove the exploration assumptions required in previous works and improve the dependence on the desired accuracy $\epsilon$ from $\mathcal{O}(\epsilon^{-5})$ to $\mathcal{O}(\epsilon^{-4})$. Our result relies on a connection between imitation learning and online learning in MDPs with adversarial losses. For the latter setting, we present the first result for infinite-horizon linear MDPs, which may be of independent interest. Moreover, we provide a strengthened result for the finite-horizon case, where we achieve $\mathcal{O}(\epsilon^{-2})$. Numerical experiments with linear function approximation show that ILARL outperforms other commonly used algorithms.
APA
Viano, L., Skoulakis, S. & Cevher, V. (2024). Imitation Learning in Discounted Linear MDPs without exploration assumptions. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:49471-49505. Available from https://proceedings.mlr.press/v235/viano24a.html.