Multi-Task Imitation Learning for Linear Dynamical Systems
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:586-599, 2023.
Abstract
We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared k-dimensional representation is learned from H source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by Õ(k n_x/(H N_shared) + k n_u/N_target), where n_x > k is the state dimension, n_u is the input dimension, N_shared denotes the amount of data collected from each source policy during representation learning, and N_target is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
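The two-phase scheme described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's algorithm: it posits static linear expert policies of the form K_h = F_h Φ with a shared k × n_x representation Φ, estimates each source policy by least squares, recovers the shared subspace from the SVD of the stacked estimates (phase (a)), and then fits only the low-dimensional head F on target data with Φ frozen (phase (b)). All dimensions, noise levels, and helper names here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nu, k, H = 10, 3, 2, 8  # state dim, input dim, representation dim, source tasks

# Hypothetical generative model: shared representation Phi (k x nx) with
# orthonormal rows, task-specific heads F_h (nu x k), policies K_h = F_h @ Phi.
Phi_true = np.linalg.qr(rng.standard_normal((nx, k)))[0].T

def make_task():
    F = rng.standard_normal((nu, k))
    return F @ Phi_true  # nu x nx expert policy

# Phase (a): pre-training. Estimate each source policy by least squares from
# (state, expert input) pairs, then take the top-k right singular subspace of
# the stacked estimates as the learned representation.
N_shared = 200
K_stack = []
for _ in range(H):
    K = make_task()
    X = rng.standard_normal((N_shared, nx))                    # states
    U = X @ K.T + 0.01 * rng.standard_normal((N_shared, nu))   # noisy expert inputs
    K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)
    K_stack.append(K_hat.T)
K_stack = np.vstack(K_stack)                                   # (H*nu) x nx
_, _, Vt = np.linalg.svd(K_stack, full_matrices=False)
Phi_hat = Vt[:k]                                               # learned k x nx representation

# Phase (b): fine-tuning. Only the small head F (nu x k) is fit on target
# data, so far fewer target samples are needed than for a full nu x nx policy.
K_target = make_task()
N_target = 50
X_t = rng.standard_normal((N_target, nx))
U_t = X_t @ K_target.T + 0.01 * rng.standard_normal((N_target, nu))
Z_t = X_t @ Phi_hat.T                                          # k-dim features
F_hat, *_ = np.linalg.lstsq(Z_t, U_t, rcond=None)
K_target_hat = F_hat.T @ Phi_hat

err = np.linalg.norm(K_target_hat - K_target) / np.linalg.norm(K_target)
print(round(err, 3))
```

With N_target much smaller than what fitting a full n_u × n_x policy would require, the relative error stays small, mirroring the intuition behind the stated bound: the n_x-dependent cost of learning the representation is amortized over the H source tasks.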