Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design

Adam Foster, Desi R Ivanova, Ilyas Malik, Tom Rainforth
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3384-3395, 2021.

Abstract

We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.
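The abstract's core idea, a policy network that maps the history of (design, outcome) pairs to the next design in a single forward pass, exploiting permutation symmetry of the history, can be sketched in miniature. This is an illustrative toy with untrained random weights, not the paper's actual architecture or code; all names (`encode`, `policy`, `W_enc`, `W_out`) are hypothetical. Each past pair is encoded independently and the encodings are sum-pooled, which makes the output invariant to the order of past experiments.

```python
import math
import random

random.seed(0)

DIM = 8  # encoding dimension (arbitrary for this sketch)

# Random fixed weights stand in for trained network parameters.
W_enc = [[random.gauss(0, 1) for _ in range(2)] for _ in range(DIM)]
W_out = [random.gauss(0, 1) for _ in range(DIM)]

def encode(design, outcome):
    """Encode one (design, outcome) pair into a DIM-dimensional vector."""
    return [w[0] * design + w[1] * outcome for w in W_enc]

def policy(history):
    """Map the experiment history to the next design in one forward pass.

    Sum-pooling the per-pair encodings makes the policy invariant to the
    order of past (design, outcome) pairs -- the key symmetry the paper's
    customized architecture exploits for exchangeable experiments.
    """
    pooled = [0.0] * DIM
    for design, outcome in history:
        enc = encode(design, outcome)
        pooled = [p + e for p, e in zip(pooled, enc)]
    hidden = [math.tanh(p) for p in pooled]  # simple nonlinearity
    return sum(w * h for w, h in zip(W_out, hidden))  # linear "emitter" head

# The next design depends on the set, not the order, of past experiments:
hist = [(0.5, 1.2), (-0.3, 0.7)]
assert abs(policy(hist) - policy(hist[::-1])) < 1e-12
```

Because deployment only requires this forward pass, design selection is fast; all expensive optimization is moved into the offline training of the weights against a contrastive information objective.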

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-foster21a,
  title     = {Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design},
  author    = {Foster, Adam and Ivanova, Desi R and Malik, Ilyas and Rainforth, Tom},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3384--3395},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/foster21a/foster21a.pdf},
  url       = {https://proceedings.mlr.press/v139/foster21a.html},
  abstract  = {We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.}
}
Endnote
%0 Conference Paper
%T Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
%A Adam Foster
%A Desi R Ivanova
%A Ilyas Malik
%A Tom Rainforth
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-foster21a
%I PMLR
%P 3384--3395
%U https://proceedings.mlr.press/v139/foster21a.html
%V 139
%X We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.
APA
Foster, A., Ivanova, D. R., Malik, I., & Rainforth, T. (2021). Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3384-3395. Available from https://proceedings.mlr.press/v139/foster21a.html.