Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators

Artur Dubrawski, Jeff Schneider
Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, PMLR R1:165-172, 1997.

Abstract

This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.
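
To make the abstract's key idea concrete, here is a minimal, illustrative Python sketch of a racing loop of this general kind: candidate settings are re-evaluated in rounds, empirically poor ones are dropped early, and the search then re-samples more densely around the incumbent to hone estimates in the promising region. Everything in it (the noisy_eval objective, the halving schedule, the 1-D parameter space) is a hypothetical stand-in for exposition, not the paper's actual memory based stochastic optimizer.

import random
import statistics

random.seed(0)

def noisy_eval(x):
    # Hypothetical stand-in for one expensive validation run of a
    # function approximator at hyper-parameter setting x.
    return (x - 0.3) ** 2 + random.gauss(0.0, 0.05)

def race(candidates, evals_per_round=3, n_rounds=5, survive=0.5):
    # Each round, give every surviving candidate a few more noisy
    # evaluations, then discard the empirically worst ones, so total
    # computation concentrates on the most promising settings.
    scores = {x: [] for x in candidates}
    survivors = list(candidates)
    for _ in range(n_rounds):
        for x in survivors:
            scores[x].extend(noisy_eval(x) for _ in range(evals_per_round))
        survivors.sort(key=lambda x: statistics.mean(scores[x]))
        survivors = survivors[:max(1, int(len(survivors) * survive))]
    return survivors[0], statistics.mean(scores[survivors[0]])

# Hone in on promising regions of a continuous space: race a coarse
# sample, then re-sample around the current best with a shrinking width.
best, width = 0.5, 1.0
for _ in range(3):
    candidates = [min(1.0, max(0.0, best + random.uniform(-width, width)))
                  for _ in range(8)]
    best, err = race(candidates)
    width *= 0.5  # narrow the search region around the incumbent

print("best setting ~ %.3f, estimated validation error %.3f" % (best, err))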

Cite this Paper


BibTeX
@InProceedings{pmlr-vR1-dubrawski97a,
  title     = {Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators},
  author    = {Dubrawski, Artur and Schneider, Jeff},
  booktitle = {Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics},
  pages     = {165--172},
  year      = {1997},
  editor    = {Madigan, David and Smyth, Padhraic},
  volume    = {R1},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--07 Jan},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/r1/dubrawski97a/dubrawski97a.pdf},
  url       = {https://proceedings.mlr.press/r1/dubrawski97a.html},
  abstract  = {This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.},
  note      = {Reissued by PMLR on 30 March 2021.}
}
Endnote
%0 Conference Paper
%T Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators
%A Artur Dubrawski
%A Jeff Schneider
%B Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 1997
%E David Madigan
%E Padhraic Smyth
%F pmlr-vR1-dubrawski97a
%I PMLR
%P 165--172
%U https://proceedings.mlr.press/r1/dubrawski97a.html
%V R1
%X This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.
%Z Reissued by PMLR on 30 March 2021.
APA
Dubrawski, A. & Schneider, J. (1997). Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators. Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R1:165-172. Available from https://proceedings.mlr.press/r1/dubrawski97a.html. Reissued by PMLR on 30 March 2021.
