Expensive Function Optimization with Stochastic Binary Outcomes

Matthew Tesch, Jeff Schneider, Howie Choset
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1283-1291, 2013.

Abstract

Real world systems often have parameterized controllers which can be tuned to improve performance. Bayesian optimization methods provide for efficient optimization of these controllers, so as to reduce the number of required experiments on the expensive physical system. In this paper we address Bayesian optimization in the setting where performance is only observed through a stochastic binary outcome – success or failure of the experiment. Unlike bandit problems, the goal is to maximize the system performance after this offline training phase rather than minimize regret during training. In this work we define the stochastic binary optimization problem and propose an approach using an adaptation of Gaussian Processes for classification that presents a Bayesian optimization framework for this problem. We propose an experiment selection metric for this setting based on expected improvement. We demonstrate the algorithm’s performance on synthetic problems and on a real snake robot learning to move over an obstacle.
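To make the selection step concrete, below is a minimal sketch of expected-improvement-based Bayesian optimization with stochastic binary outcomes. For simplicity it fits a plain Gaussian process regressor to the 0/1 observations as a stand-in for the GP-classification surrogate described in the paper; `run_experiment`, the kernel length scale, the noise level, and the candidate grid are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: EI-based experiment selection when each trial returns
# only success (1) or failure (0). This is NOT the authors' exact method;
# it replaces their GP-classification surrogate with a GP regressor on
# the binary outcomes, which is a common simplification.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def expected_improvement(mu, sigma, best, xi=0.01):
    """Standard EI of the surrogate mean over the current best estimate."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)


def run_experiment(x):
    # Hypothetical stand-in for the expensive physical trial: success with
    # an unknown, parameter-dependent probability, failure otherwise.
    p_success = np.exp(-8.0 * (x - 0.7) ** 2)
    return float(np.random.rand() < p_success)


rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))           # initial controller parameters
y = np.array([run_experiment(x) for x in X.ravel()])
candidates = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(20):                          # offline training budget
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.25).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = gp.predict(X).max()               # current best predicted success rate
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, best))]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next[0]))

# Final recommendation: the parameter with the highest predicted success rate,
# reflecting the goal of maximizing post-training performance rather than
# minimizing regret during training.
print(candidates[np.argmax(gp.predict(candidates))])
```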

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-tesch13,
  title     = {Expensive Function Optimization with Stochastic Binary Outcomes},
  author    = {Tesch, Matthew and Schneider, Jeff and Choset, Howie},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1283--1291},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/tesch13.pdf},
  url       = {https://proceedings.mlr.press/v28/tesch13.html},
  abstract  = {Real world systems often have parameterized controllers which can be tuned to improve performance. Bayesian optimization methods provide for efficient optimization of these controllers, so as to reduce the number of required experiments on the expensive physical system. In this paper we address Bayesian optimization in the setting where performance is only observed through a stochastic binary outcome – success or failure of the experiment. Unlike bandit problems, the goal is to maximize the system performance after this offline training phase rather than minimize regret during training. In this work we define the stochastic binary optimization problem and propose an approach using an adaptation of Gaussian Processes for classification that presents a Bayesian optimization framework for this problem. We propose an experiment selection metric for this setting based on expected improvement. We demonstrate the algorithm’s performance on synthetic problems and on a real snake robot learning to move over an obstacle.}
}
Endnote
%0 Conference Paper
%T Expensive Function Optimization with Stochastic Binary Outcomes
%A Matthew Tesch
%A Jeff Schneider
%A Howie Choset
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-tesch13
%I PMLR
%P 1283--1291
%U https://proceedings.mlr.press/v28/tesch13.html
%V 28
%N 3
%X Real world systems often have parameterized controllers which can be tuned to improve performance. Bayesian optimization methods provide for efficient optimization of these controllers, so as to reduce the number of required experiments on the expensive physical system. In this paper we address Bayesian optimization in the setting where performance is only observed through a stochastic binary outcome – success or failure of the experiment. Unlike bandit problems, the goal is to maximize the system performance after this offline training phase rather than minimize regret during training. In this work we define the stochastic binary optimization problem and propose an approach using an adaptation of Gaussian Processes for classification that presents a Bayesian optimization framework for this problem. We propose an experiment selection metric for this setting based on expected improvement. We demonstrate the algorithm’s performance on synthetic problems and on a real snake robot learning to move over an obstacle.
RIS
TY - CPAPER
TI - Expensive Function Optimization with Stochastic Binary Outcomes
AU - Matthew Tesch
AU - Jeff Schneider
AU - Howie Choset
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-tesch13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 1283
EP - 1291
L1 - http://proceedings.mlr.press/v28/tesch13.pdf
UR - https://proceedings.mlr.press/v28/tesch13.html
AB - Real world systems often have parameterized controllers which can be tuned to improve performance. Bayesian optimization methods provide for efficient optimization of these controllers, so as to reduce the number of required experiments on the expensive physical system. In this paper we address Bayesian optimization in the setting where performance is only observed through a stochastic binary outcome – success or failure of the experiment. Unlike bandit problems, the goal is to maximize the system performance after this offline training phase rather than minimize regret during training. In this work we define the stochastic binary optimization problem and propose an approach using an adaptation of Gaussian Processes for classification that presents a Bayesian optimization framework for this problem. We propose an experiment selection metric for this setting based on expected improvement. We demonstrate the algorithm’s performance on synthetic problems and on a real snake robot learning to move over an obstacle.
ER -
APA
Tesch, M., Schneider, J. & Choset, H. (2013). Expensive Function Optimization with Stochastic Binary Outcomes. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1283-1291. Available from https://proceedings.mlr.press/v28/tesch13.html.