Derivative Free Optimization Via Repeated Classification

Tatsunori Hashimoto, Steve Yadlowsky, John Duchi
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:2027-2036, 2018.

Abstract

We develop a procedure for minimizing a function from $n$ batched function value measurements at each of $T$ rounds, using classifiers to identify the function’s sublevel set. We show that sufficiently accurate classifiers achieve linear convergence rates, and that the convergence rate is tied to the difficulty of actively learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the required classification scheme. The end result is a computationally efficient method, requiring no tuning, that consistently outperforms other methods on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems where batched function queries are natural.
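
The procedure described in the abstract can be pictured with a short sketch. The Python code below is an illustrative approximation, not the authors' implementation: the use of scikit-learn's RandomForestClassifier as the base classifier, the median threshold defining the sublevel-set labels, the rejection-sampling proposal step, and all parameter values (n, T, n_boot, the search box) are assumptions made for the example.

    # A minimal sketch of derivative-free optimization via repeated
    # sublevel-set classification with a bootstrap ensemble.
    # Illustrative only; choices below are assumptions, not the paper's method.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def minimize_by_classification(f, bounds, n=50, T=20, n_boot=10, seed=0):
        rng = np.random.default_rng(seed)
        d = len(bounds)
        lo, hi = np.array(bounds, dtype=float).T

        # Round 0: uniform batch over the search box.
        X = rng.uniform(lo, hi, size=(n, d))
        best_x, best_y = None, np.inf

        for t in range(T):
            y = np.array([f(x) for x in X])          # n function evaluations per round
            if y.min() < best_y:
                best_y, best_x = y.min(), X[y.argmin()]

            # Label points at or below the batch median as sublevel-set members.
            labels = (y <= np.median(y)).astype(int)

            # Bootstrap ensemble approximating the classification scheme:
            # each classifier is fit on a resampled copy of the labelled batch.
            clfs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)
                clf = RandomForestClassifier(n_estimators=50, random_state=seed)
                clf.fit(X[idx], labels[idx])
                clfs.append(clf)

            # Propose the next batch by rejection sampling: keep uniform draws
            # that a randomly chosen bootstrap classifier places in the sublevel set.
            proposals, attempts = [], 0
            while len(proposals) < n and attempts < 100:
                cand = rng.uniform(lo, hi, size=(8 * n, d))
                clf = clfs[rng.integers(n_boot)]
                proposals.extend(cand[clf.predict(cand) == 1].tolist())
                attempts += 1
            # Fall back to uniform draws if the classifiers reject everything.
            while len(proposals) < n:
                proposals.append(rng.uniform(lo, hi, size=d).tolist())
            X = np.array(proposals[:n])

        return best_x, best_y

For example, minimizing a two-dimensional quadratic over the box [-5, 5]^2:

    f = lambda x: float(np.sum((x - 1.0) ** 2))
    x_star, f_star = minimize_by_classification(f, bounds=[(-5, 5), (-5, 5)])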

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-hashimoto18a,
  title     = {Derivative Free Optimization Via Repeated Classification},
  author    = {Hashimoto, Tatsunori and Yadlowsky, Steve and Duchi, John},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {2027--2036},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/hashimoto18a/hashimoto18a.pdf},
  url       = {https://proceedings.mlr.press/v84/hashimoto18a.html},
  abstract  = {We develop a procedure for minimizing a function using $n$ batched function value measurements at each of $T$ rounds by using classifiers to identify a function’s sublevel set. We show that sufficiently accurate classifiers can achieve linear convergence rates, and show that the convergence rate is tied to the difficulty of active learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the necessary classification scheme. The end result is a computationally efficient method requiring no tuning that consistently outperforms other methods on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems where batched function queries are natural.}
}
Endnote
%0 Conference Paper
%T Derivative Free Optimization Via Repeated Classification
%A Tatsunori Hashimoto
%A Steve Yadlowsky
%A John Duchi
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-hashimoto18a
%I PMLR
%P 2027--2036
%U https://proceedings.mlr.press/v84/hashimoto18a.html
%V 84
%X We develop a procedure for minimizing a function using $n$ batched function value measurements at each of $T$ rounds by using classifiers to identify a function’s sublevel set. We show that sufficiently accurate classifiers can achieve linear convergence rates, and show that the convergence rate is tied to the difficulty of active learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the necessary classification scheme. The end result is a computationally efficient method requiring no tuning that consistently outperforms other methods on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems where batched function queries are natural.
APA
Hashimoto, T., Yadlowsky, S., & Duchi, J. (2018). Derivative Free Optimization Via Repeated Classification. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:2027-2036. Available from https://proceedings.mlr.press/v84/hashimoto18a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v84/hashimoto18a/hashimoto18a.pdf