Structured Variationally Auto-encoded Optimization
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3267-3275, 2018.
Abstract
We tackle the problem of optimizing a black-box objective function defined over a highly-structured input space. This problem is ubiquitous in science and engineering. In machine learning, inferring the structure of a neural network and the Automatic Statistician (AS), where the optimal kernel combination for a Gaussian process is selected, are two important examples. We use the AS as a case study to describe our approach, which can be easily generalized to other domains. We propose a Structure Generating Variational Autoencoder (SGVAE) to embed the original space of kernel combinations into a low-dimensional continuous manifold where Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other form of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the latent space of the SGVAE, which is computed using variational inference. The key aspect of our approach is that the SGVAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work.
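The core idea of the abstract, running Bayesian optimization in a continuous latent space learned over a structured input space, can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: the 2-D latent space, the `objective` function (a stand-in for decoding a latent point into a kernel combination and scoring it), and the fixed GP hyperparameters are all hypothetical, and the acquisition function is optimized by simple random search.

```python
import math
import numpy as np

# Hypothetical stand-in for "decode latent point -> structure -> score".
# In the paper this would decode z through the SGVAE and evaluate the
# black-box objective on the resulting structure (e.g. a GP kernel).
def objective(z):
    return -np.sum((z - 0.5) ** 2)  # toy objective, maximized at z = (0.5, 0.5)

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel between two sets of latent points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Z, y, Zq, noise=1e-6):
    # GP posterior mean and std at query points Zq given observations (Z, y).
    K = rbf(Z, Z) + noise * np.eye(len(Z))
    Ks = rbf(Zq, Z)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    # Standard EI acquisition for maximization.
    gamma = (mu - best) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(gamma / math.sqrt(2)))
    phi = np.exp(-0.5 * gamma ** 2) / math.sqrt(2 * math.pi)
    return sd * (gamma * Phi + phi)

rng = np.random.default_rng(0)
Z = rng.uniform(0, 1, size=(5, 2))           # initial latent designs
y = np.array([objective(z) for z in Z])

for _ in range(15):                          # BO loop in the latent space
    cand = rng.uniform(0, 1, size=(256, 2))  # random acquisition search
    mu, sd = gp_posterior(Z, y, cand)
    z_next = cand[np.argmax(expected_improvement(mu, sd, y.max()))]
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(z_next))

best_z = Z[np.argmax(y)]                     # best latent point found
```

The paper additionally propagates the SGVAE's latent-space uncertainty into this search, which the fixed-kernel GP above does not capture.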