Beating Monte Carlo Integration: a Nonasymptotic Study of Kernel Smoothing Methods
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:548-556, 2018.
Abstract
Evaluating integrals is a ubiquitous issue, and Monte Carlo methods, exploiting advances in random number generation over the last decades, offer a popular and powerful alternative to deterministic integration techniques, which are unsuited in particular when the domain of integration is complex. This paper is devoted to the nonasymptotic study of a kernel smoothing based competitor built from a sequence of $n\geq 1$ i.i.d. random vectors with arbitrary continuous probability distribution $f(x)dx$, originally proposed in Delyon et al. (2016). We establish a probability bound showing that the method under study, though biased, produces an estimate approximating the target integral $\int_{\mathbb{R}^d}\varphi(x)\,dx$ with an error of order $o(1/\sqrt{n})$ uniformly over a class $\Phi$ of functions $\varphi$, under weak complexity/smoothness assumptions on the class $\Phi$, thus outperforming Monte Carlo procedures. This striking result is shown to derive from an appropriate decomposition of the maximal deviation between the target integrals and their estimates, highlighting the remarkable benefit, in terms of statistical accuracy, of averaging strongly dependent terms in this situation. The theoretical analysis rests on sharp probability inequalities for degenerate $U$-statistics. It is illustrated by numerical results in the context of covariate shift regression, providing empirical evidence of the relevance of the approach.
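To make the estimator concrete, the sketch below (not part of the original abstract) assumes the leave-one-out form $\frac{1}{n}\sum_{i=1}^n \varphi(X_i)/\hat f_{-i}(X_i)$ from Delyon et al. (2016), with $\hat f_{-i}$ a leave-one-out kernel density estimate; the Gaussian kernel, the bandwidth $h = n^{-1/(d+4)}$, and the helper names kernel_smoothing_integral and monte_carlo_integral are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def kernel_smoothing_integral(phi, X, h):
    # Leave-one-out kernel smoothing estimate of \int phi(x) dx:
    #   (1/n) * sum_i phi(X_i) / \hat f_{-i}(X_i),
    # where \hat f_{-i} is the Gaussian-kernel density estimate
    # built from all sample points except X_i.
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]             # pairwise differences
    sq_dist = np.sum(diff ** 2, axis=-1)
    K = np.exp(-sq_dist / (2 * h ** 2)) / ((2 * np.pi) ** (d / 2) * h ** d)
    np.fill_diagonal(K, 0.0)                         # drop the i = j term (leave one out)
    f_loo = K.sum(axis=1) / (n - 1)                  # \hat f_{-i}(X_i) for each i
    return np.mean(phi(X) / f_loo)

def monte_carlo_integral(phi, X, f):
    # Plain Monte Carlo baseline: importance weighting by the true density f.
    return np.mean(phi(X) / f(X))

rng = np.random.default_rng(0)
n, d = 2000, 1
X = rng.normal(size=(n, d))                          # i.i.d. sample from f = N(0, I_d)
f = lambda x: np.exp(-np.sum(x ** 2, axis=1) / 2) / (2 * np.pi) ** (d / 2)
phi = lambda x: np.exp(-np.sum(x ** 2, axis=1))      # \int_{R^d} phi(x) dx = pi^{d/2}
h = n ** (-1.0 / (d + 4))                            # illustrative bandwidth choice

print("true integral     :", np.pi ** (d / 2))
print("kernel smoothing  :", kernel_smoothing_integral(phi, X, h))
print("plain Monte Carlo :", monte_carlo_integral(phi, X, f))
```

Note that the kernel smoothing estimate, unlike the Monte Carlo baseline, does not require knowledge of the sampling density $f$: the leave-one-out density estimate in the denominator plays that role, which is precisely the setting of the paper.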