A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments

Adam Foster, Martin Jankowiak, Matthew O’Meara, Yee Whye Teh, Tom Rainforth
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2959-2969, 2020.

Abstract

We introduce a fully stochastic-gradient-based approach to Bayesian optimal experimental design (BOED). Our approach utilizes variational lower bounds on the expected information gain (EIG) of an experiment that can be simultaneously optimized with respect to both the variational and design parameters. This allows the design process to be carried out through a single unified stochastic gradient ascent procedure, in contrast to existing approaches, which typically construct a pointwise EIG estimator before passing it to a separate optimizer. We provide a number of variational objectives, including the novel adaptive contrastive estimation (ACE) bound. Finally, we show that our gradient-based approaches provide effective design optimization in substantially higher-dimensional settings than existing approaches.
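To make the unified optimization concrete, below is a minimal sketch (not the authors' implementation) of ascending a prior-contrastive lower bound on the EIG by stochastic gradient ascent directly over the design. The model, sample sizes, and learning rate are illustrative assumptions: a toy Gaussian model y = sin(d)·θ + ε, whose EIG is maximized at d = π/2. The paper's ACE bound additionally learns an adaptive proposal for the contrastive samples, which this sketch omits.

# Minimal sketch, not the authors' implementation: stochastic gradient
# ascent on a prior-contrastive lower bound on the EIG for a toy model
#   theta ~ N(0, 1),  y | theta, d ~ N(sin(d) * theta, 1).
# EIG(d) is maximized where |sin(d)| = 1, i.e. at d = pi/2.
import torch

torch.manual_seed(0)
d = torch.tensor(0.5, requires_grad=True)     # design parameter to optimize
optimizer = torch.optim.Adam([d], lr=0.05)
N, L = 256, 16                                # outer and contrastive sample sizes

for step in range(500):
    theta0 = torch.randn(N)                     # theta_0 ~ p(theta)
    y = torch.sin(d) * theta0 + torch.randn(N)  # reparameterized y ~ p(y | theta_0, d)
    thetas = torch.randn(L, N)                  # contrastive draws theta_l ~ p(theta)

    def log_lik(theta):                       # log N(y; sin(d) * theta, 1), up to a constant
        return -0.5 * (y - torch.sin(d) * theta) ** 2

    numerator = log_lik(theta0)               # shape (N,)
    denominator = torch.logsumexp(
        torch.cat([numerator.unsqueeze(0), log_lik(thetas)]), dim=0
    ) - torch.log(torch.tensor(L + 1.0))      # log mean of the L+1 likelihoods
    bound = (numerator - denominator).mean()  # Monte Carlo lower bound on EIG(d)

    optimizer.zero_grad()
    (-bound).backward()                       # ascend the bound w.r.t. d
    optimizer.step()

print(f"d = {d.item():.3f} (optimum pi/2 = 1.571), bound = {bound.item():.3f}")

Because y is reparameterized through d, a single optimization loop updates the design directly from Monte Carlo gradients of the bound, rather than constructing a pointwise EIG estimator at each candidate design and handing it to a separate optimizer.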

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-foster20a,
  title     = {A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments},
  author    = {Foster, Adam and Jankowiak, Martin and O'Meara, Matthew and Teh, Yee Whye and Rainforth, Tom},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2959--2969},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/foster20a/foster20a.pdf},
  url       = {https://proceedings.mlr.press/v108/foster20a.html}
}
Endnote
%0 Conference Paper
%T A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments
%A Adam Foster
%A Martin Jankowiak
%A Matthew O’Meara
%A Yee Whye Teh
%A Tom Rainforth
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-foster20a
%I PMLR
%P 2959--2969
%U https://proceedings.mlr.press/v108/foster20a.html
%V 108
APA
Foster, A., Jankowiak, M., O’Meara, M., Teh, Y.W. & Rainforth, T. (2020). A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2959-2969. Available from https://proceedings.mlr.press/v108/foster20a.html.