On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:3558-3582, 2022.
Abstract
We study the off-policy evaluation (OPE) problem in an infinite-horizon Markov decision process with continuous states and actions. We recast Q-function estimation as a special form of the nonparametric instrumental variables (NPIV) estimation problem. We first show that, under a mild condition, the NPIV formulation of Q-function estimation is well-posed in the sense of the L2 measure of ill-posedness with respect to the data-generating distribution, bypassing a strong assumption on the discount factor γ imposed in the recent literature for obtaining L2 convergence rates of various Q-function estimators. Thanks to this new well-posedness property, we derive the first minimax lower bounds on the convergence rates of nonparametric estimation of the Q-function and its derivatives in both sup-norm and L2-norm, and show that they coincide with the rates for classical nonparametric regression (Stone, 1982). We then propose a sieve two-stage least squares estimator and establish its rate optimality in both norms under mild conditions. Our general results on well-posedness and minimax lower bounds are of independent interest for studying not only other nonparametric Q-function estimators but also efficient estimation of the value of any target policy in off-policy settings.
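For orientation, the recasting can be sketched in standard OPE notation (which may differ from the paper's exact symbols). The Q-function Q^π of a target policy π satisfies the Bellman equation, which is equivalent to a conditional moment restriction of NPIV type:

\[
\mathbb{E}\!\left[\, R_t + \gamma \int_{\mathcal{A}} Q^{\pi}(S_{t+1}, a')\, \pi(a' \mid S_{t+1})\, \mathrm{d}a' \;-\; Q^{\pi}(S_t, A_t) \,\middle|\, S_t, A_t \right] = 0 \quad \text{almost surely},
\]

where (S_t, A_t, R_t, S_{t+1}) is a transition drawn from the data-generating distribution and \(\mathcal{A}\) is the action space. The unknown Q^π appears both inside and outside the conditional expectation, which is what makes this a special instance of NPIV.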
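Below is a minimal numerical sketch of a sieve two-stage least squares (2SLS) estimator built on this moment condition. The polynomial bases, the Monte Carlo approximation of the policy integral, the ridge stabilizer, and the helper pi_sample are illustrative assumptions for exposition, not the paper's construction.

```python
# Illustrative sketch of a sieve 2SLS Q-function estimator for the NPIV
# moment condition above. All modeling choices here are hypothetical.
import numpy as np

def poly_basis(s, a, deg):
    """Tensor-product polynomial sieve basis on a 1-D state and 1-D action."""
    return np.stack([s**i * a**j for i in range(deg + 1)
                     for j in range(deg + 1)], axis=-1)

def sieve_2sls_q(S, A, R, S_next, pi_sample, gamma,
                 deg_q=2, deg_iv=3, n_mc=32, ridge=1e-8):
    """2SLS on E[R - (Q(S,A) - gamma * E_pi Q(S',.)) | S, A] = 0.

    S, A, R, S_next: 1-D numpy arrays of transitions.
    pi_sample(s_next, m): draws m actions from the target policy pi(.|s_next).
    Returns a callable Q-hat(s, a).
    """
    n = len(S)
    # "Endogenous" design: psi(S,A) - gamma * Monte Carlo estimate of E_pi[psi(S',A')].
    Psi = poly_basis(S, A, deg_q)                        # (n, J)
    E_next = np.zeros_like(Psi)
    for i in range(n):
        a_mc = pi_sample(S_next[i], n_mc)                # (n_mc,)
        E_next[i] = poly_basis(np.full(n_mc, S_next[i]), a_mc, deg_q).mean(axis=0)
    X = Psi - gamma * E_next                             # (n, J)
    # Instrument basis in the conditioning variables (S, A), richer than the Q sieve.
    B = poly_basis(S, A, deg_iv)                         # (n, K), K >= J
    # First stage: project the design and the reward onto the instrument space.
    P = B @ np.linalg.solve(B.T @ B + ridge * np.eye(B.shape[1]), B.T)
    Xp, Rp = P @ X, P @ R
    # Second stage: least squares of projected reward on projected design.
    c = np.linalg.solve(Xp.T @ Xp + ridge * np.eye(X.shape[1]), Xp.T @ Rp)
    return lambda s, a: poly_basis(np.asarray(s), np.asarray(a), deg_q) @ c

# Example usage with synthetic data (hypothetical linear-Gaussian dynamics and policy):
rng = np.random.default_rng(0)
n = 2000
S = rng.uniform(-1, 1, n); A = rng.uniform(-1, 1, n)
S_next = 0.8 * S + 0.2 * A + 0.1 * rng.standard_normal(n)
R = S * A + 0.05 * rng.standard_normal(n)
pi_sample = lambda s_next, m: 0.5 * s_next + 0.1 * rng.standard_normal(m)
Q_hat = sieve_2sls_q(S, A, R, S_next, pi_sample, gamma=0.9)
print(Q_hat(0.2, -0.3))
```

One design choice mirrors standard sieve NPIV practice: the instrument basis (deg_iv) is taken richer than the Q-function sieve (deg_q). The paper's well-posedness result is what suggests estimators of this kind can attain the classical nonparametric rates without restricting the discount factor γ.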