(Bandit) Convex Optimization with Biased Noisy Gradient Oracles
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:819-828, 2016.
Abstract
A popular class of algorithms for convex optimization and online learning with bandit feedback relies on constructing noisy gradient estimates, which are then used in place of the actual gradients in appropriately adjusted first-order algorithms. Depending on the properties of the function to be optimized and the nature of “noise” in the bandit feedback, the bias and variance of the gradient estimates exhibit various tradeoffs. In this paper we propose a novel framework that replaces the specific gradient estimation methods with an abstract oracle model. With the help of the new framework we unify previous works, reproducing their results in a clean and concise fashion; perhaps more importantly, the framework also allows us to formally show that to achieve the optimal root-n rate, either the algorithms that use existing gradient estimators or the proofs have to go beyond what exists today.
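For intuition about the kind of gradient estimator the abstract refers to, here is a minimal, self-contained sketch (not the paper's abstract oracle model) of the classical two-point spherical-smoothing estimator plugged into plain gradient descent; the objective, step size, and smoothing radius below are illustrative choices, not values from the paper.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Two-point spherical-smoothing gradient estimate from bandit (zeroth-order) feedback.

    Queries f at x + delta*u and x - delta*u for a random unit direction u.
    The estimate is unbiased for the smoothed surrogate of f, so it carries a
    bias (shrinking with delta for smooth f) relative to the true gradient,
    at the price of higher variance as delta decreases.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def bandit_gradient_descent(f, x0, n_rounds, step, delta, seed=0):
    """First-order method that uses the noisy gradient estimate in place of the true gradient."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_rounds):
        g = two_point_gradient_estimate(f, x, delta, rng)
        x = x - step * g
    return x

if __name__ == "__main__":
    # Illustrative smooth convex objective; the minimizer is the all-ones vector.
    target = np.ones(5)
    f = lambda x: float(np.sum((x - target) ** 2))
    x_hat = bandit_gradient_descent(f, x0=np.zeros(5), n_rounds=5000, step=0.05, delta=0.1)
    print(np.round(x_hat, 2))
```

The smoothing radius `delta` controls the bias-variance tradeoff the abstract mentions: a smaller `delta` reduces the bias of the estimate but inflates its variance, which is the kind of tradeoff the paper's oracle model abstracts away from specific estimators.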