Bayesian experimental design using regularized determinantal point processes

Michal Derezinski, Feynman Liang, Michael Mahoney
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3197-3207, 2020.

Abstract

We establish a fundamental connection between Bayesian experimental design and determinantal point processes (DPPs). Experimental design is a classical task in combinatorial optimization, where we wish to select a small subset of $d$-dimensional vectors to minimize a statistical optimality criterion. We show that a new regularized variant of DPPs can be used to design efficient algorithms for finding $(1+\epsilon)$-approximate solutions to experimental design under four commonly used optimality criteria: A-, C-, D- and V-optimality. A key novelty is that we offer improved guarantees under the Bayesian framework, where prior knowledge is incorporated into the criteria. Our algorithm returns a $(1+\epsilon)$-approximate solution when the subset size $k$ is $\Omega\left(\frac{d_A}{\epsilon} + \frac{\log(1/\epsilon)}{\epsilon^2}\right)$, where $d_A \ll d$ is an effective dimension determined by prior knowledge (via a precision matrix $\mathbf{A}$). This is the first approximation guarantee where the dependence on $d$ is replaced by an effective dimension. Moreover, the time complexity of our algorithm significantly improves on existing approaches with comparable guarantees.
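To make the quantities in the abstract concrete, the following sketch evaluates the Bayesian A- and D-optimality criteria for a candidate subset of experiments, and a commonly used notion of effective dimension. This is an illustrative example only, not the paper's algorithm; the function names, the choice of prior precision matrix, and the exact definition of the effective dimension here are assumptions for demonstration.

```python
import numpy as np

def bayesian_criteria(X, S, A):
    """Evaluate Bayesian A- and D-optimality for subset S of rows of X.

    X : (n, d) matrix of candidate experiment vectors
    S : list of selected row indices (the design of size k)
    A : (d, d) prior precision matrix (the Bayesian regularizer)
    """
    Xs = X[S]                              # selected experiments, shape (k, d)
    M = Xs.T @ Xs + A                      # regularized information matrix
    M_inv = np.linalg.inv(M)
    a_value = np.trace(M_inv)              # A-optimality: trace of the inverse
    d_value = np.linalg.slogdet(M_inv)[1]  # D-optimality on a log scale
    return a_value, d_value

def effective_dimension(X, A):
    """One common definition: d_A = tr(Sigma (Sigma + A)^{-1}), Sigma = X^T X.

    The paper's precise definition may differ; this is the standard
    ridge-style effective dimension, shown for intuition only.
    """
    Sigma = X.T @ X
    return np.trace(Sigma @ np.linalg.inv(Sigma + A))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))          # n=100 candidates in d=5 dimensions
A = np.eye(5)                              # isotropic prior precision
S = [0, 3, 7, 12, 25, 40]                  # an arbitrary subset of size k=6
a_value, d_value = bayesian_criteria(X, S, A)
d_A = effective_dimension(X, A)            # satisfies 0 <= d_A <= d
```

A subset-selection algorithm would search over choices of `S` to minimize one of these criteria; the paper's contribution is a regularized-DPP sampler whose expected criterion value is within $(1+\epsilon)$ of optimal once $k$ exceeds the stated bound in $d_A$.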

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-derezinski20a,
  title     = {Bayesian experimental design using regularized determinantal point processes},
  author    = {Derezinski, Michal and Liang, Feynman and Mahoney, Michael},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3197--3207},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/derezinski20a/derezinski20a.pdf},
  url       = {http://proceedings.mlr.press/v108/derezinski20a.html},
  abstract  = {We establish a fundamental connection between Bayesian experimental design and determinantal point processes (DPPs). Experimental design is a classical task in combinatorial optimization, where we wish to select a small subset of $d$-dimensional vectors to minimize a statistical optimality criterion. We show that a new regularized variant of DPPs can be used to design efficient algorithms for finding $(1+\epsilon)$-approximate solutions to experimental design under four commonly used optimality criteria: A-, C-, D- and V-optimality. A key novelty is that we offer improved guarantees under the Bayesian framework, where prior knowledge is incorporated into the criteria. Our algorithm returns a $(1+\epsilon)$-approximate solution when the subset size $k$ is $\Omega\left(\frac{d_A}{\epsilon} + \frac{\log(1/\epsilon)}{\epsilon^2}\right)$, where $d_A \ll d$ is an effective dimension determined by prior knowledge (via a precision matrix $\mathbf{A}$). This is the first approximation guarantee where the dependence on $d$ is replaced by an effective dimension. Moreover, the time complexity of our algorithm significantly improves on existing approaches with comparable guarantees.}
}
Endnote
%0 Conference Paper %T Bayesian experimental design using regularized determinantal point processes %A Michal Derezinski %A Feynman Liang %A Michael Mahoney %B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2020 %E Silvia Chiappa %E Roberto Calandra %F pmlr-v108-derezinski20a %I PMLR %P 3197--3207 %U http://proceedings.mlr.press/v108/derezinski20a.html %V 108 %X We establish a fundamental connection between Bayesian experimental design and determinantal point processes (DPPs). Experimental design is a classical task in combinatorial optimization, where we wish to select a small subset of $d$-dimensional vectors to minimize a statistical optimality criterion. We show that a new regularized variant of DPPs can be used to design efficient algorithms for finding $(1+\epsilon)$-approximate solutions to experimental design under four commonly used optimality criteria: A-, C-, D- and V-optimality. A key novelty is that we offer improved guarantees under the Bayesian framework, where prior knowledge is incorporated into the criteria. Our algorithm returns a $(1+\epsilon)$-approximate solution when the subset size $k$ is $\Omega\left(\frac{d_A}{\epsilon} + \frac{\log(1/\epsilon)}{\epsilon^2}\right)$, where $d_A \ll d$ is an effective dimension determined by prior knowledge (via a precision matrix $\mathbf{A}$). This is the first approximation guarantee where the dependence on $d$ is replaced by an effective dimension. Moreover, the time complexity of our algorithm significantly improves on existing approaches with comparable guarantees.
APA
Derezinski, M., Liang, F. & Mahoney, M. (2020). Bayesian experimental design using regularized determinantal point processes. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3197-3207. Available from http://proceedings.mlr.press/v108/derezinski20a.html.