Approximate Function Evaluation via Multi-Armed Bandits
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:108-135, 2022.
Abstract
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\mu \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle. It is more sample-efficient to sample more frequently the components of $\mu$ along which $f$ has larger directional derivatives; however, since $\mu$ is unknown, the optimal sampling frequencies are also unknown. We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least $1-\delta$, returns an $\epsilon$-accurate estimate of $f(\mu)$. We generalize our algorithm to adapt to heteroskedastic noise, and prove asymptotic optimality when $f$ is linear. We corroborate our theoretical results with numerical experiments, showing the dramatic gains afforded by adaptivity.
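The sketch below illustrates the adaptive idea described in the abstract: split a sampling budget across the coordinates of $\mu$ in proportion to the magnitude of the plug-in partial derivatives of $f$, then return $f$ evaluated at the empirical mean. It is a minimal illustration under stated assumptions, not the paper's algorithm or guarantee; the names `adaptive_estimate`, `noisy_oracle`, the warm-up/round schedule, and the toy quadratic $f$ are all hypothetical.

```python
# Illustrative sketch (not the authors' algorithm): allocate more oracle calls to
# coordinates of mu whose directional derivatives of f are larger, using plug-in
# gradient estimates computed from the samples gathered so far.
import numpy as np

def adaptive_estimate(f, grad_f, noisy_oracle, n, budget, rounds=10):
    """Estimate f(mu) by adaptively splitting a sampling budget across coordinates.

    f            : smooth function R^n -> R
    grad_f       : gradient of f, evaluated only at the current estimate of mu
    noisy_oracle : callable(i, m) -> array of m noisy samples of mu_i
    n            : dimension of mu
    budget       : total number of oracle calls allowed
    """
    sums = np.zeros(n)    # running sum of samples per coordinate
    counts = np.zeros(n)  # number of samples per coordinate

    # Warm-up: a few uniform samples per coordinate so the plug-in gradient is defined.
    warmup = max(1, budget // (4 * n))
    for i in range(n):
        sums[i] += noisy_oracle(i, warmup).sum()
        counts[i] += warmup
    remaining = budget - n * warmup

    for _ in range(rounds):
        if remaining <= 0:
            break
        mu_hat = sums / counts
        # Importance of coordinate i ~ |partial f / partial mu_i| at the current estimate.
        weights = np.abs(grad_f(mu_hat)) + 1e-12
        alloc = np.floor(weights / weights.sum() * (remaining / rounds)).astype(int)
        for i in range(n):
            if alloc[i] > 0:
                sums[i] += noisy_oracle(i, alloc[i]).sum()
                counts[i] += alloc[i]
        remaining -= alloc.sum()

    return f(sums / counts)

# Toy usage: f(mu) = <c, mu^2>, where the first coordinate matters far more than the rest.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 5
    mu = rng.normal(size=n)
    c = np.array([10.0, 1.0, 1.0, 0.1, 0.1])
    f = lambda x: float(c @ (x ** 2))
    grad_f = lambda x: 2 * c * x
    noisy_oracle = lambda i, m: mu[i] + rng.normal(scale=1.0, size=m)
    print("true f(mu):      ", f(mu))
    print("adaptive estimate:", adaptive_estimate(f, grad_f, noisy_oracle, n, budget=20000))
```

In this toy instance the first coordinate dominates the error in $f(\hat\mu)$, so the adaptive allocation concentrates most of the budget there, which is the kind of gain over uniform sampling the abstract refers to.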