On the Sample Complexity of Stability Constrained Imitation Learning

Stephen Tu, Alexander Robey, Tingnan Zhang, Nikolai Matni
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:180-191, 2022.

Abstract

We study the following question in the context of imitation learning for continuous control: how are the underlying stability properties of an expert policy reflected in the sample complexity of an imitation learning task? We provide the first results showing that a granular connection can be made between the expert system’s incremental gain stability, a novel measure of robust convergence between pairs of system trajectories, and the dependency on the task horizon T of the resulting generalization bounds. As a special case, we delineate a class of systems for which the number of trajectories needed to achieve epsilon-suboptimality is sublinear in the task horizon T, and do so without requiring (strong) convexity of the loss function in the policy parameters. Finally, we conduct numerical experiments demonstrating the validity of our insights on both a simple nonlinear system with tunable stability properties, and on a high-dimensional quadrupedal robotic simulation.

Cite this Paper


BibTeX
@InProceedings{pmlr-v168-tu22a,
  title     = {On the Sample Complexity of Stability Constrained Imitation Learning},
  author    = {Tu, Stephen and Robey, Alexander and Zhang, Tingnan and Matni, Nikolai},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  pages     = {180--191},
  year      = {2022},
  editor    = {Firoozi, Roya and Mehr, Negar and Yel, Esen and Antonova, Rika and Bohg, Jeannette and Schwager, Mac and Kochenderfer, Mykel},
  volume    = {168},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v168/tu22a/tu22a.pdf},
  url       = {https://proceedings.mlr.press/v168/tu22a.html},
  abstract  = {We study the following question in the context of imitation learning for continuous control: how are the underlying stability properties of an expert policy reflected in the sample complexity of an imitation learning task? We provide the first results showing that a granular connection can be made between the expert system’s incremental gain stability, a novel measure of robust convergence between pairs of system trajectories, and the dependency on the task horizon T of the resulting generalization bounds. As a special case, we delineate a class of systems for which the number of trajectories needed to achieve epsilon-suboptimality is sublinear in the task horizon T, and do so without requiring (strong) convexity of the loss function in the policy parameters. Finally, we conduct numerical experiments demonstrating the validity of our insights on both a simple nonlinear system with tunable stability properties, and on a high-dimensional quadrupedal robotic simulation.}
}
Endnote
%0 Conference Paper
%T On the Sample Complexity of Stability Constrained Imitation Learning
%A Stephen Tu
%A Alexander Robey
%A Tingnan Zhang
%A Nikolai Matni
%B Proceedings of The 4th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Roya Firoozi
%E Negar Mehr
%E Esen Yel
%E Rika Antonova
%E Jeannette Bohg
%E Mac Schwager
%E Mykel Kochenderfer
%F pmlr-v168-tu22a
%I PMLR
%P 180--191
%U https://proceedings.mlr.press/v168/tu22a.html
%V 168
%X We study the following question in the context of imitation learning for continuous control: how are the underlying stability properties of an expert policy reflected in the sample complexity of an imitation learning task? We provide the first results showing that a granular connection can be made between the expert system’s incremental gain stability, a novel measure of robust convergence between pairs of system trajectories, and the dependency on the task horizon T of the resulting generalization bounds. As a special case, we delineate a class of systems for which the number of trajectories needed to achieve epsilon-suboptimality is sublinear in the task horizon T, and do so without requiring (strong) convexity of the loss function in the policy parameters. Finally, we conduct numerical experiments demonstrating the validity of our insights on both a simple nonlinear system with tunable stability properties, and on a high-dimensional quadrupedal robotic simulation.
APA
Tu, S., Robey, A., Zhang, T. & Matni, N. (2022). On the Sample Complexity of Stability Constrained Imitation Learning. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 168:180-191. Available from https://proceedings.mlr.press/v168/tu22a.html.