Unconstrained MAP Inference, Exponentiated Determinantal Point Processes, and Exponential Inapproximability
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:154-162, 2021.
Abstract
We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter p. We prove the following complexity-theoretic hardness results that explain the difficulty in approximating unconstrained MAP inference and the normalizing constant for E-DPPs. (1) Unconstrained MAP inference for an n×n matrix is NP-hard to approximate within a factor of 2^{βn}, where β = 10^{-10^{13}}. This result improves upon the (9/8 − ε)-factor inapproximability given by Kulesza and Taskar (2012). (2) The normalizing constant for E-DPPs of any (fixed) constant exponent p ≥ β^{-1} = 10^{10^{13}} is NP-hard to approximate within a factor of 2^{βpn}. This gives a(nother) negative answer to open questions posed by Kulesza and Taskar (2012); Ohsaka and Matsuoka (2020).
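To make the two problems concrete, here is a minimal brute-force sketch of both: unconstrained MAP inference (search all principal submatrices for the maximum determinant) and the E-DPP normalizing constant Z_p = Σ_S det(L_S)^p over all subsets S. The function names are illustrative, not from the paper; the exponential-time search is exactly what the hardness results above say cannot be meaningfully shortcut by approximation.

```python
import itertools
import numpy as np

def map_inference_brute_force(L):
    """Unconstrained MAP inference for a DPP with kernel L:
    exhaustively find the principal submatrix of maximum determinant.
    Exponential time; illustrative only (the problem is NP-hard to
    approximate within a 2^{beta n} factor)."""
    n = L.shape[0]
    best_set, best_det = (), 1.0  # determinant of the empty submatrix is 1
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            d = np.linalg.det(L[np.ix_(S, S)])
            if d > best_det:
                best_set, best_det = S, d
    return best_set, best_det

def edpp_normalizing_constant(L, p):
    """Normalizing constant of the E-DPP with exponent p:
    Z_p = sum over all subsets S of det(L_S)^p."""
    n = L.shape[0]
    total = 0.0
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            total += np.linalg.det(L[np.ix_(S, S)]) ** p
    return total
```

For a diagonal kernel L = diag(2, 3), the MAP set is {0, 1} with determinant 6, and Z_1 = 1 + 2 + 3 + 6 = 12; setting p > 1 weights the large-determinant (more diverse) subsets more heavily, which is the "sharpening" effect the exponent controls.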