Derivative Free Optimization Via Repeated Classification


Tatsunori Hashimoto, Steve Yadlowsky, John Duchi ;
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:2027-2036, 2018.

Abstract

We develop a procedure for minimizing a function using $n$ batched function value measurements at each of $T$ rounds by using classifiers to identify a function’s sublevel set. We show that sufficiently accurate classifiers can achieve linear convergence rates, and that the convergence rate is tied to the difficulty of actively learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the necessary classification scheme. The end result is a computationally efficient method requiring no tuning that consistently outperforms other methods on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems where batched function queries are natural.
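To illustrate the overall loop described in the abstract, a minimal sketch follows: at each round a batch of $n$ points is evaluated, the points at or below the batch median are labeled as the estimated sublevel set, a classifier is fit to those labels, and new candidates predicted to lie in the sublevel set are kept for the next round. This is not the paper's method (the paper uses bootstrapped classifiers and an active-learning analysis); the 1-nearest-neighbor classifier, the sampling ranges, and the candidate-proposal noise below are all illustrative assumptions.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_query):
    # 1-nearest-neighbor classifier: each query point takes the label
    # of its closest training point (a stand-in for the paper's classifiers)
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[np.argmin(d, axis=1)]

def minimize_by_classification(f, dim, n=50, T=20, seed=0):
    # f: batched objective mapping an (m, dim) array to m function values
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2.0, 2.0, size=(n, dim))  # initial batch (assumed box domain)
    best = None
    for _ in range(T):
        fx = f(X)
        # label the batch: 1 = at or below the median (estimated sublevel set)
        labels = (fx <= np.median(fx)).astype(int)
        i = int(np.argmin(fx))
        if best is None or fx[i] < best[1]:
            best = (X[i], fx[i])
        # propose candidates by perturbing current points, then keep only
        # those the classifier predicts to lie in the sublevel set
        cand = X[rng.integers(n, size=5 * n)] + 0.3 * rng.standard_normal((5 * n, dim))
        keep = cand[one_nn_predict(X, labels, cand) == 1]
        if len(keep) >= n:
            X = keep[rng.choice(len(keep), n, replace=False)]
        else:
            # fall back to uniform samples if too few candidates survive
            X = np.vstack([keep, rng.uniform(-2.0, 2.0, size=(n - len(keep), dim))])
    return best

# Example: minimize the 2-D sphere function sum(x^2)
x_star, f_star = minimize_by_classification(lambda X: np.sum(X**2, axis=1), dim=2)
print(f_star)
```

Because each round only shrinks the region from which candidates are drawn, the batch concentrates around the minimizer, mirroring the linear-convergence intuition in the abstract.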
