Black-box Optimization with a Politician
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1624-1631, 2016.
Abstract
We propose a new framework for black-box convex optimization which is well-suited for situations where gradient computations are expensive. We derive a new method for this framework which leverages several concepts from convex optimization, from standard first-order methods (e.g., gradient descent or quasi-Newton methods) to analytical centers (i.e., minimizers of self-concordant barriers). We demonstrate empirically that our new technique compares favorably with state-of-the-art algorithms (such as BFGS).
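The abstract references two standard building blocks: plain first-order steps and analytic centers of self-concordant barriers. The following is a minimal, hypothetical Python/NumPy sketch of these two ingredients only, not the paper's actual method: a vanilla gradient step, and Newton's method on the log-barrier of a polytope {x : Ax <= b}, whose minimizer is the polytope's analytic center. All function names and parameters here are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the algorithm from the paper.
# Two standard ingredients it builds on: a gradient step, and the
# analytic center of a polytope {x : A x <= b}.
import numpy as np

def gradient_step(x, grad, lr=0.1):
    """One step of vanilla gradient descent (hypothetical helper)."""
    return x - lr * grad(x)

def analytic_center(A, b, x0, iters=50):
    """Newton's method on the log-barrier  -sum_i log(b_i - a_i^T x),
    whose minimizer is the analytic center of {x : A x <= b}.
    x0 must be strictly feasible (all slacks b - A x0 > 0)."""
    x = x0.copy()
    for _ in range(iters):
        s = b - A @ x                       # slacks, must stay positive
        g = A.T @ (1.0 / s)                 # gradient of the barrier
        H = A.T @ np.diag(1.0 / s**2) @ A   # Hessian of the barrier
        dx = np.linalg.solve(H, -g)         # Newton direction
        t = 1.0                             # damped step: keep slacks > 0
        while np.any(b - A @ (x + t * dx) <= 0):
            t *= 0.5
        x = x + t * dx
    return x

# Example: analytic center of the box [-1, 1]^2 (should be near the origin).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x_c = analytic_center(A, b, x0=np.array([0.3, -0.2]))
```

For context, in classical cutting-plane schemes each gradient evaluation at a query point x_t yields the half-space {y : grad f(x_t)^T (y - x_t) <= 0}, which by convexity contains every minimizer; intersecting these half-spaces shrinks a localization polytope, and the next query point can be taken as its analytic center.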