Miles Olson, Elizabeth Santorella, Louis C. Tiao, Sait Cakmak, Mia Garrard, Samuel Daulton, Zhiyuan Jerry Lin, Sebastian Ament, Bernard Beckerman, Eric Onofrey, Paschal Igusti, Cristian Lara, Benjamin Letham, Cesar Cardoso, Shiyun Sunny Shen, Andy Chenyuan Lin, Matthew Grange, Elena Kashtelyan, David Eriksson, Maximilian Balandat, Eytan Bakshy
Proceedings of the Fourth International Conference on Automated Machine Learning, PMLR 293:21/1-25, 2025.
Abstract
Optimizing industry-scale machine learning systems involves resource-intensive black-box optimization. Adaptive experimentation substantially improves the sample efficiency of such tasks compared with naive baselines (such as grid or random search) by utilizing surrogate models and sequential optimization algorithms. Ax (https://ax.dev) is an open-source platform for adaptive experimentation. Ax is highly extensible and full-featured, and is used at scale at Meta. We discuss Ax’s design, usage, and performance. Off the shelf, Ax achieves state-of-the-art performance in a wide range of synthetic and real-world black-box optimization tasks in machine learning, engineering, and science.
Cite this Paper
BibTeX
@InProceedings{pmlr-v293-olson25a,
title = {Ax: A Platform for Adaptive Experimentation},
author = {Olson, Miles and Santorella, Elizabeth and Tiao, Louis C. and Cakmak, Sait and Garrard, Mia and Daulton, Samuel and Lin, Zhiyuan Jerry and Ament, Sebastian and Beckerman, Bernard and Onofrey, Eric and Igusti, Paschal and Lara, Cristian and Letham, Benjamin and Cardoso, Cesar and Shen, Shiyun Sunny and Lin, Andy Chenyuan and Grange, Matthew and Kashtelyan, Elena and Eriksson, David and Balandat, Maximilian and Bakshy, Eytan},
booktitle = {Proceedings of the Fourth International Conference on Automated Machine Learning},
pages = {21/1--25},
year = {2025},
editor = {Akoglu, Leman and Doerr, Carola and van Rijn, Jan N. and Garnett, Roman and Gardner, Jacob R.},
volume = {293},
series = {Proceedings of Machine Learning Research},
month = {08--11 Sep},
publisher = {PMLR},
pdf = {https://raw.githubusercontent.com/mlresearch/v293/main/assets/olson25a/olson25a.pdf},
url = {https://proceedings.mlr.press/v293/olson25a.html},
abstract = {Optimizing industry-scale machine learning systems involves resource-intensive black-box optimization. Adaptive experimentation substantially improves the sample efficiency of such tasks compared with naive baselines (such as grid or random search) by utilizing surrogate models and sequential optimization algorithms. Ax \url{https://ax.dev} is an open-source platform for adaptive experimentation. Ax is highly extensible and full-featured, and is used at scale at Meta. We discuss Ax's design, usage, and performance. Off the shelf, Ax achieves state-of-the-art performance in a wide range of synthetic and real-world black-box optimization tasks in machine learning, engineering, and science.}
}
Endnote
%0 Conference Paper
%T Ax: A Platform for Adaptive Experimentation
%A Miles Olson
%A Elizabeth Santorella
%A Louis C. Tiao
%A Sait Cakmak
%A Mia Garrard
%A Samuel Daulton
%A Zhiyuan Jerry Lin
%A Sebastian Ament
%A Bernard Beckerman
%A Eric Onofrey
%A Paschal Igusti
%A Cristian Lara
%A Benjamin Letham
%A Cesar Cardoso
%A Shiyun Sunny Shen
%A Andy Chenyuan Lin
%A Matthew Grange
%A Elena Kashtelyan
%A David Eriksson
%A Maximilian Balandat
%A Eytan Bakshy
%B Proceedings of the Fourth International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Leman Akoglu
%E Carola Doerr
%E Jan N. van Rijn
%E Roman Garnett
%E Jacob R. Gardner
%F pmlr-v293-olson25a
%I PMLR
%P 21/1--25
%U https://proceedings.mlr.press/v293/olson25a.html
%V 293
%X Optimizing industry-scale machine learning systems involves resource-intensive black-box optimization. Adaptive experimentation substantially improves the sample efficiency of such tasks compared with naive baselines (such as grid or random search) by utilizing surrogate models and sequential optimization algorithms. Ax (https://ax.dev) is an open-source platform for adaptive experimentation. Ax is highly extensible and full-featured, and is used at scale at Meta. We discuss Ax's design, usage, and performance. Off the shelf, Ax achieves state-of-the-art performance in a wide range of synthetic and real-world black-box optimization tasks in machine learning, engineering, and science.
APA
Olson, M., Santorella, E., Tiao, L.C., Cakmak, S., Garrard, M., Daulton, S., Lin, Z.J., Ament, S., Beckerman, B., Onofrey, E., Igusti, P., Lara, C., Letham, B., Cardoso, C., Shen, S.S., Lin, A.C., Grange, M., Kashtelyan, E., Eriksson, D., Balandat, M. & Bakshy, E. (2025). Ax: A Platform for Adaptive Experimentation. Proceedings of the Fourth International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 293:21/1-25. Available from https://proceedings.mlr.press/v293/olson25a.html.