On the (In)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes

Naoto Ohsaka, Tatsuya Matsuoka
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7414-7423, 2020.

Abstract

We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task, unless input matrices are forced to have favorable structures. In particular, we prove the following: (1) Computing $\sum_{S} \det(\mathbf{A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is $\textsf{UP}$-hard and $\textsf{Mod}_3\textsf{P}$-hard, which gives a negative answer to an open question posed by Kulesza and Taskar (2012). (2) $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S}) \det(\mathbf{C}_{S,S})$ is $\textsf{NP}$-hard to approximate within a factor of $ 2^{\mathcal{O}(|\mathcal{I}|^{1-\epsilon})} $ for any $\epsilon > 0$, where $|\mathcal{I}|$ is the input size. This result is stronger than $\sharp\textsf{P}$-hardness for the case of two matrices by Gillenwater (2014). (3) There exists a $ k^{\mathcal{O}(k)} |\mathcal{I}|^{\mathcal{O}(1)} $-time algorithm for computing $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S})$, where $k$ is “the maximum rank of $\mathbf{A}$ and $\mathbf{B}$” or “the treewidth of the graph formed by nonzero entries of $\mathbf{A}$ and $\mathbf{B}$.” Such parameterized algorithms are said to be fixed-parameter tractable.
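For concreteness, the normalizing constant studied here is $\sum_{S \subseteq [n]} \prod_j \det(\mathbf{M}_{j\,S,S})$, summed over all $2^n$ subsets. The sketch below (function names are illustrative, not from the paper) computes it by brute force, which takes exponential time; the paper's results say this is essentially unavoidable for three or more matrices, while for one matrix the closed form $\sum_S \det(\mathbf{A}_{S,S}) = \det(\mathbf{A} + \mathbf{I})$ gives a polynomial-time answer.

```python
import itertools
import numpy as np

def principal_minor(M, S):
    """det of the principal submatrix of M indexed by S (det of the empty matrix is 1)."""
    if not S:
        return 1.0
    return float(np.linalg.det(M[np.ix_(S, S)]))

def product_dpp_normalizer(mats):
    """Brute-force normalizing constant of a product of DPPs:
    sum over all subsets S of prod_j det(M_j[S, S]).
    Exponential in n; intended only to illustrate the quantity being studied."""
    n = mats[0].shape[0]
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            term = 1.0
            for M in mats:
                term *= principal_minor(M, list(S))
            total += term
    return total
```

As a sanity check, with a single matrix the brute-force sum matches the closed form `np.linalg.det(A + np.eye(n))`; no such closed form is known for three or more matrices, which is exactly where the paper's hardness results apply.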

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ohsaka20a,
  title     = {On the ({I}n)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes},
  author    = {Ohsaka, Naoto and Matsuoka, Tatsuya},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7414--7423},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ohsaka20a/ohsaka20a.pdf},
  url       = {https://proceedings.mlr.press/v119/ohsaka20a.html},
  abstract  = {We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task, unless input matrices are forced to have favorable structures. In particular, we prove the following: (1) Computing $\sum_{S} \det(\mathbf{A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is $\textsf{UP}$-hard and $\textsf{Mod}_3\textsf{P}$-hard, which gives a negative answer to an open question posed by Kulesza and Taskar (2012). (2) $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S}) \det(\mathbf{C}_{S,S})$ is $\textsf{NP}$-hard to approximate within a factor of $ 2^{\mathcal{O}(|\mathcal{I}|^{1-\epsilon})} $ for any $\epsilon > 0$, where $|\mathcal{I}|$ is the input size. This result is stronger than $\sharp\textsf{P}$-hardness for the case of two matrices by Gillenwater (2014). (3) There exists a $ k^{\mathcal{O}(k)} |\mathcal{I}|^{\mathcal{O}(1)} $-time algorithm for computing $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S})$, where $k$ is “the maximum rank of $\mathbf{A}$ and $\mathbf{B}$” or “the treewidth of the graph formed by nonzero entries of $\mathbf{A}$ and $\mathbf{B}$.” Such parameterized algorithms are said to be fixed-parameter tractable.}
}
Endnote
%0 Conference Paper
%T On the (In)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes
%A Naoto Ohsaka
%A Tatsuya Matsuoka
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ohsaka20a
%I PMLR
%P 7414--7423
%U https://proceedings.mlr.press/v119/ohsaka20a.html
%V 119
%X We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task, unless input matrices are forced to have favorable structures. In particular, we prove the following: (1) Computing $\sum_{S} \det(\mathbf{A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is $\textsf{UP}$-hard and $\textsf{Mod}_3\textsf{P}$-hard, which gives a negative answer to an open question posed by Kulesza and Taskar (2012). (2) $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S}) \det(\mathbf{C}_{S,S})$ is $\textsf{NP}$-hard to approximate within a factor of $ 2^{\mathcal{O}(|\mathcal{I}|^{1-\epsilon})} $ for any $\epsilon > 0$, where $|\mathcal{I}|$ is the input size. This result is stronger than $\sharp\textsf{P}$-hardness for the case of two matrices by Gillenwater (2014). (3) There exists a $ k^{\mathcal{O}(k)} |\mathcal{I}|^{\mathcal{O}(1)} $-time algorithm for computing $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S})$, where $k$ is “the maximum rank of $\mathbf{A}$ and $\mathbf{B}$” or “the treewidth of the graph formed by nonzero entries of $\mathbf{A}$ and $\mathbf{B}$.” Such parameterized algorithms are said to be fixed-parameter tractable.
APA
Ohsaka, N. & Matsuoka, T. (2020). On the (In)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7414-7423. Available from https://proceedings.mlr.press/v119/ohsaka20a.html.