Hardness of Maximum Likelihood Learning of DPPs

Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3800-3819, 2022.

Abstract

Determinantal Point Processes (DPPs) are a widely used probabilistic model for negatively correlated sets. DPPs are used in machine learning applications to select a diverse, yet representative, subset of data. In these applications, the parameters of the DPP must be fit to the data; typically, one seeks a set of parameters that maximizes the likelihood of the data. The algorithms used for this task either optimize over a limited family of DPPs, or else use local-improvement heuristics that provide no theoretical guarantee of optimality. It is natural to ask whether there exist efficient algorithms for finding a maximum-likelihood DPP model for a given data set. In seminal work on DPPs in machine learning, Kulesza conjectured in his PhD thesis (2012) that the problem is NP-complete. In this work we prove Kulesza’s conjecture; moreover, we prove that even computing a $1-\frac{1}{\mathrm{poly}\log N}$-approximation to the maximum log-likelihood of a DPP on a ground set of $N$ items is NP-complete. At the same time, we obtain the first polynomial-time algorithm achieving a nontrivial worst-case approximation to the optimal log-likelihood: a $1/\log m$-approximation algorithm (for data sets of size $m$), which moreover achieves a $1-\frac{1}{\log N}$-approximation when every one of the $N$ elements appears in an $O(1/N)$-fraction of the observed subsets. In terms of techniques, the hardness result reduces to solving a gap instance of a “vector coloring” problem on a hypergraph, obtained by adapting the constructions of Bogdanov, Obata, and Trevisan (FOCS 2002) and using the strong expanders of Alon and Capalbo (FOCS 2007).
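To make the learning problem concrete, the following is a minimal sketch (not from the paper; the function name is illustrative) of the objective being maximized. Under the standard L-ensemble parameterization, a DPP over a ground set of $N$ items is given by an $N \times N$ positive semidefinite kernel $L$ and assigns a subset $S$ probability $\det(L_S)/\det(L+I)$, where $L_S$ is the principal submatrix of $L$ indexed by $S$. Maximum-likelihood learning asks for the kernel $L$ maximizing the total log-likelihood of the $m$ observed subsets:

    import numpy as np

    def dpp_log_likelihood(L, subsets):
        """Log-likelihood of observed subsets under the L-ensemble DPP
        P(S) = det(L_S) / det(L + I)."""
        N = L.shape[0]
        # The normalizer det(L + I) equals the sum of det(L_S) over all subsets S.
        _, log_norm = np.linalg.slogdet(L + np.eye(N))
        total = 0.0
        for S in subsets:
            idx = np.asarray(sorted(S))
            # Principal submatrix of L indexed by the observed subset.
            _, log_det = np.linalg.slogdet(L[np.ix_(idx, idx)])
            total += log_det - log_norm
        return total

    # Toy usage: a random PSD kernel on N = 4 items and m = 3 observed subsets.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    L = B @ B.T
    print(dpp_log_likelihood(L, [{0, 2}, {1}, {0, 1, 3}]))

This sketch only evaluates the objective for a fixed kernel; the paper's hardness result shows that maximizing this quantity over all PSD kernels $L$, or even approximating it to the stated factors, is intractable unless P = NP.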

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-grigorescu22a,
  title     = {Hardness of Maximum Likelihood Learning of DPPs},
  author    = {Grigorescu, Elena and Juba, Brendan and Wimmer, Karl and Xie, Ning},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {3800--3819},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/grigorescu22a/grigorescu22a.pdf},
  url       = {https://proceedings.mlr.press/v178/grigorescu22a.html}
}
Endnote
%0 Conference Paper
%T Hardness of Maximum Likelihood Learning of DPPs
%A Elena Grigorescu
%A Brendan Juba
%A Karl Wimmer
%A Ning Xie
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-grigorescu22a
%I PMLR
%P 3800--3819
%U https://proceedings.mlr.press/v178/grigorescu22a.html
%V 178
APA
Grigorescu, E., Juba, B., Wimmer, K. & Xie, N. (2022). Hardness of Maximum Likelihood Learning of DPPs. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:3800-3819. Available from https://proceedings.mlr.press/v178/grigorescu22a.html.