# The Sample Complexity of Approximate Rejection Sampling With Applications to Smoothed Online Learning

*Proceedings of Thirty Sixth Conference on Learning Theory*, PMLR 195:228-273, 2023.

#### Abstract

Suppose we are given access to $n$ independent samples from distribution $\mu$ and we wish to output one of them with the goal of making the output distributed as close as possible to a target distribution $\nu$. In this work we show that the optimal total variation distance as a function of $n$ is given by $\tilde\Theta(\frac{D}{f'(n)})$ over the class of all pairs $\nu,\mu$ with a bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was studied only for the case when the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (to have bounded $f$-divergence, as opposed to bounded Radon-Nikodym derivative). Finally, we also study the efficacy of importance sampling for mean estimates uniform over a function class, and compare importance sampling with rejection sampling.
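To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of one standard way to output a single sample from among $n$ i.i.d. draws from $\mu$ so that the output approximates $\nu$: sampling-importance-resampling, which picks a sample with probability proportional to its Radon-Nikodym weight $\frac{d\nu}{d\mu}$. The `weight` function and the toy distributions below are illustrative assumptions, not from the paper.

```python
import random


def approx_sample(samples, weight, rng=random):
    """Return one of the given mu-samples, approximately distributed as nu.

    `weight(x)` is assumed to compute the Radon-Nikodym derivative
    d(nu)/d(mu) at x. Resampling a single point with probability
    proportional to its weight is the classic sampling-importance-
    resampling heuristic; the paper studies how close any such
    select-one-of-n scheme can get to nu in total variation,
    as a function of n and the f-divergence D_f(nu || mu).
    """
    w = [weight(x) for x in samples]
    total = sum(w)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for x, wi in zip(samples, w):
        acc += wi
        if r <= acc:
            return x
    return samples[-1]  # guard against floating-point round-off


# Toy check: mu = Uniform(0, 1), nu has density 2x on (0, 1),
# so d(nu)/d(mu)(x) = 2x and the mean of nu is 2/3.
rng = random.Random(0)
draws = [
    approx_sample([rng.random() for _ in range(50)], lambda x: 2 * x, rng)
    for _ in range(2000)
]
mean = sum(draws) / len(draws)  # should be close to 2/3
```

As $n$ grows, the resampled output converges to $\nu$; the abstract's result quantifies the optimal rate of this convergence in total variation under an $f$-divergence bound.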