The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3036-3083, 2019.
Abstract
The relative effectiveness of model-based versus model-free methods is a longstanding question in reinforcement learning (RL). Motivated by the recent empirical success of RL on continuous control tasks, we study the sample complexity of popular model-based and model-free algorithms on the Linear Quadratic Regulator (LQR). We show that for policy evaluation, a simple model-based plug-in method requires asymptotically fewer samples than the classical least-squares temporal difference (LSTD) estimator to reach the same quality of solution; the sample complexity gap between the two methods can be at least a factor of the state dimension. For policy optimization, we study a simple family of problem instances and show that nominal (certainty equivalence principle) control also requires several factors of state and input dimension fewer samples than the policy gradient method to reach the same level of control performance on these instances. Furthermore, the gap persists even when employing baselines commonly used in practice. To the best of our knowledge, this is the first theoretical result demonstrating a separation in sample complexity between model-based and model-free methods on a continuous control task.
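The model-based plug-in (certainty equivalence) approach mentioned in the abstract can be sketched as follows: fit the system matrices (A, B) by least squares from excited trajectories, then solve the Riccati equation for the estimated model as if it were the truth. The specific system, dimensions, noise scale, and rollout length below are hypothetical, chosen only for illustration; they are not the instances analyzed in the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical stable 2-state, 1-input system (illustrative only).
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)  # state cost
R = np.eye(1)  # input cost
n, d = A.shape[0], B.shape[1]

# Collect a rollout with random excitation inputs and process noise.
T = 2000
X, U, Xn = [], [], []
x = np.zeros(n)
for _ in range(T):
    u = rng.normal(size=d)
    w = 0.1 * rng.normal(size=n)
    x_next = A @ x + B @ u + w
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next
X, U, Xn = map(np.array, (X, U, Xn))

# Plug-in step 1: least-squares fit of [A B] from x_{t+1} ~ [A B] [x_t; u_t].
Z = np.hstack([X, U])                            # regressors, shape (T, n+d)
Theta, *_ = np.linalg.lstsq(Z, Xn, rcond=None)   # shape (n+d, n)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

# Plug-in step 2: certainty equivalence -- solve the Riccati equation
# for the *estimated* model and read off the feedback gain.
P_hat = solve_discrete_are(A_hat, B_hat, Q, R)
K_hat = np.linalg.solve(R + B_hat.T @ P_hat @ B_hat, B_hat.T @ P_hat @ A_hat)

# Compare against the optimal gain computed from the true model.
P_star = solve_discrete_are(A, B, Q, R)
K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
gain_error = np.linalg.norm(K_hat - K_star)
print("gain error:", gain_error)
```

The controller `u = -K_hat @ x` would then be applied to the real system; the paper's analysis concerns how the suboptimality of such plug-in controllers scales with the number of samples, compared with directly searching over policies.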