Sample-Optimal Parametric Q-Learning Using Linearly Additive Features
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6995-7004, 2019.
Abstract
Consider a Markov decision process (MDP) that admits a set of state-action features which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximately optimal policy using a sample size proportional to the feature dimension $K$ and invariant with respect to the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided that anchor state-actions exist, which imply implicit non-negativity in the feature space. We augment the algorithm using techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy that is $\epsilon$-optimal from any initial state with high probability using $\widetilde{O}(K/(\epsilon^2(1-\gamma)^3))$ sample transitions for arbitrarily large-scale MDPs with a discount factor $\gamma\in(0,1)$. A matching information-theoretic lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors).
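The key structural assumption is that the transition model factors through the features, $P(s' \mid s,a) = \sum_k \phi_k(s,a)\,\psi_k(s')$, so the Bellman update only needs to refresh a $K$-dimensional parameter rather than a table over all states. The following is a minimal sketch of this idea on a synthetic linear-feature MDP; it is not the authors' augmented algorithm (no sampling, variance reduction, or confidence bounds), and all variable names here are illustrative assumptions.

```python
import numpy as np

# Sketch (hypothetical, not the paper's exact method): parametric value
# iteration for an MDP whose transitions are linear in K features,
#     P(s' | s, a) = sum_k phi_k(s, a) * psi_k(s').
# The Q-function is parameterized by w in R^K via
#     Q_w(s, a) = r(s, a) + gamma * phi(s, a) @ w,
# and each Bellman update refreshes only w, independent of |S|.

rng = np.random.default_rng(0)
S, A, K, gamma = 6, 2, 3, 0.9

# Rows of psi are distributions over next states; phi(s, a) is a convex
# combination of features, so P = phi @ psi is a valid stochastic matrix.
psi = rng.random((K, S)); psi /= psi.sum(axis=1, keepdims=True)
phi = rng.random((S, A, K)); phi /= phi.sum(axis=2, keepdims=True)
r = rng.random((S, A))

w = np.zeros(K)
for _ in range(300):                    # K-dimensional value iteration
    Q = r + gamma * phi @ w             # Q_w(s, a), shape (S, A)
    V = Q.max(axis=1)                   # greedy value V(s) = max_a Q(s, a)
    w = psi @ V                         # w_k = E_{s' ~ psi_k}[V(s')]

# Sanity check: Q_w satisfies the Bellman equation for P = phi @ psi.
P = phi @ psi                           # P[s, a, s'], shape (S, A, S)
Q = r + gamma * phi @ w
V = Q.max(axis=1)
bellman_residual = np.abs(Q - (r + gamma * P @ V)).max()
print(bellman_residual < 1e-8)
```

In the sample-based setting the exact expectation `psi @ V` is replaced by Monte Carlo estimates from sampled transitions, which is where the $\widetilde{O}(K/(\epsilon^2(1-\gamma)^3))$ sample complexity arises.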