Stochastic Zeroth-order Optimization in High Dimensions

Yining Wang, Simon Du, Sivaraman Balakrishnan, Aarti Singh
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1356-1365, 2018.

Abstract

We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only logarithmically on the ambient dimension of the problem. Empirical results confirm our theoretical findings and show that the algorithms we design outperform classical zeroth-order optimization methods in the high-dimensional setting.
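To give a concrete flavor of the Lasso-based gradient estimation mentioned in the abstract, below is a minimal Python sketch: the function is queried along random directions, the finite-difference responses are regressed on those directions with an L1 penalty, and the resulting sparse coefficient vector serves as a gradient estimate. The sampling scheme, penalty level, step size, and the plain Euclidean descent update are illustrative assumptions only, not the authors' algorithm (the paper uses a mirror descent update).

```python
# A minimal sketch (not the authors' algorithm) of Lasso-based zeroth-order
# gradient estimation followed by a simple descent loop. All parameter choices
# below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso


def lasso_gradient_estimate(f, x, n_queries=100, delta=1e-2, lam=0.01, seed=None):
    """Estimate a sparse gradient of f at x from noisy zeroth-order queries."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = rng.standard_normal((n_queries, d))            # random query directions
    # f(x + delta*z) - f(x) ~= delta * <grad f(x), z>, so regressing the scaled
    # differences on Z with an L1 penalty recovers a sparse gradient estimate.
    y = np.array([(f(x + delta * z) - f(x)) / delta for z in Z])
    model = Lasso(alpha=lam, fit_intercept=False)
    model.fit(Z, y)
    return model.coef_


def sparse_zeroth_order_descent(f, x0, n_iters=50, step=0.1, **est_kwargs):
    """Descend using Lasso gradient estimates (Euclidean steps for simplicity)."""
    x = x0.copy()
    for _ in range(n_iters):
        g = lasso_gradient_estimate(f, x, **est_kwargs)
        x = x - step * g
    return x


if __name__ == "__main__":
    d, s = 1000, 5                                      # ambient dimension vs. sparsity
    support = np.arange(s)

    def f(x):                                           # noisy function of only s coordinates
        return np.sum((x[support] - 1.0) ** 2) + 1e-3 * np.random.randn()

    x_hat = sparse_zeroth_order_descent(f, np.zeros(d))
    print("error on the true support:", np.linalg.norm(x_hat[support] - 1.0))
```

Roughly speaking, the sparse regression step is what allows each gradient estimate to use far fewer queries than the ambient dimension, which is the mechanism behind the logarithmic dependence on dimension claimed in the abstract.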

Cite this Paper

BibTeX
@InProceedings{pmlr-v84-wang18e,
  title     = {Stochastic Zeroth-order Optimization in High Dimensions},
  author    = {Wang, Yining and Du, Simon and Balakrishnan, Sivaraman and Singh, Aarti},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1356--1365},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/wang18e/wang18e.pdf},
  url       = {https://proceedings.mlr.press/v84/wang18e.html},
  abstract  = {We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only logarithmically on the ambient dimension of the problem. Empirical results confirm our theoretical findings and show that the algorithms we design outperform classical zeroth-order optimization methods in the high-dimensional setting.}
}
Endnote
%0 Conference Paper
%T Stochastic Zeroth-order Optimization in High Dimensions
%A Yining Wang
%A Simon Du
%A Sivaraman Balakrishnan
%A Aarti Singh
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-wang18e
%I PMLR
%P 1356--1365
%U https://proceedings.mlr.press/v84/wang18e.html
%V 84
%X We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only logarithmically on the ambient dimension of the problem. Empirical results confirm our theoretical findings and show that the algorithms we design outperform classical zeroth-order optimization methods in the high-dimensional setting.
APA
Wang, Y., Du, S., Balakrishnan, S. & Singh, A. (2018). Stochastic Zeroth-order Optimization in High Dimensions. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1356-1365. Available from https://proceedings.mlr.press/v84/wang18e.html.
