Testing Identity of Multidimensional Histograms

Ilias Diakonikolas, Daniel M. Kane, John Peebles
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1107-1131, 2019.

Abstract

We investigate the problem of identity testing for multidimensional histogram distributions. A distribution $p: D \to \mathbb{R}_+$, where $D \subseteq \mathbb{R}^d$, is called a $k$-histogram if there exists a partition of the domain into $k$ axis-aligned rectangles such that $p$ is constant within each such rectangle. Histograms are one of the most fundamental nonparametric families of distributions and have been extensively studied in computer science and statistics. We give the first identity tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound. In more detail, let $q$ be an unknown $d$-dimensional $k$-histogram distribution in fixed dimension $d$, and $p$ be an explicitly given $d$-dimensional $k$-histogram. We want to correctly distinguish, with probability at least $2/3$, between the case that $p = q$ versus $\|p-q\|_1 \geq \epsilon$. We design an algorithm for this hypothesis testing problem with sample complexity $O((\sqrt{k}/\epsilon^2) 2^{d/2} \log^{2.5 d}(k/\epsilon))$ that runs in sample-polynomial time. Our algorithm is robust to model misspecification, i.e., succeeds even if $q$ is only promised to be {\em close} to a $k$-histogram. Moreover, for $k = 2^{\Omega(d)}$, we show a sample complexity lower bound of $(\sqrt{k}/\epsilon^2) \cdot \Omega(\log(k)/d)^{d-1}$ when $d\geq 2$. That is, for any fixed dimension $d$, our upper and lower bounds are nearly matching. Prior to our work, the sample complexity of the $d=1$ case was well-understood, but no algorithm with sub-learning sample complexity was known, even for $d=2$. Our new upper and lower bounds have interesting conceptual implications regarding the relation between learning and testing in this setting.
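To make the objects in the abstract concrete, here is a minimal, hypothetical Python sketch (it is not the paper's tester and not code by the authors): it represents an explicitly given 2-dimensional $k$-histogram as a list of axis-aligned rectangles with constant densities, and numerically estimates the distance $\|p-q\|_1$ that the tester must distinguish from zero. The `Rect` class and the helper function are illustrative names introduced here, not from the paper.

```python
import numpy as np

# A d-dimensional k-histogram: the domain is partitioned into k axis-aligned
# rectangles, and the density is constant on each rectangle.
# Here d = 2 and the domain is the unit square [0,1)^2 (illustrative choice).

class Rect:
    """Axis-aligned rectangle [x0, x1) x [y0, y1) carrying a constant density."""
    def __init__(self, x0, x1, y0, y1, density):
        self.x0, self.x1, self.y0, self.y1 = x0, x1, y0, y1
        self.density = density

    def mass(self):
        return self.density * (self.x1 - self.x0) * (self.y1 - self.y0)


def l1_distance(p_rects, q_rects, grid=400):
    """Approximate ||p - q||_1 by integrating |p(x) - q(x)| on a fine grid.

    For piecewise-constant densities this converges to the true L1 distance
    as the grid is refined; it only illustrates the separation (>= epsilon)
    that an identity tester must detect, not how the tester detects it.
    """
    xs = (np.arange(grid) + 0.5) / grid
    X, Y = np.meshgrid(xs, xs, indexing="ij")

    def density(rects):
        out = np.zeros_like(X)
        for r in rects:
            inside = (X >= r.x0) & (X < r.x1) & (Y >= r.y0) & (Y < r.y1)
            out[inside] = r.density
        return out

    cell_area = 1.0 / (grid * grid)
    return np.sum(np.abs(density(p_rects) - density(q_rects))) * cell_area


# Example: p is an explicit 2-histogram; q reweights the same two rectangles.
p = [Rect(0.0, 0.5, 0.0, 1.0, 1.2), Rect(0.5, 1.0, 0.0, 1.0, 0.8)]
q = [Rect(0.0, 0.5, 0.0, 1.0, 1.0), Rect(0.5, 1.0, 0.0, 1.0, 1.0)]
assert abs(sum(r.mass() for r in p) - 1.0) < 1e-9  # p integrates to 1
print(l1_distance(p, q))  # ~0.2, i.e. the pair is epsilon-separated for eps = 0.2
```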

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-diakonikolas19b,
  title     = {Testing Identity of Multidimensional Histograms},
  author    = {Diakonikolas, Ilias and Kane, Daniel M. and Peebles, John},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1107--1131},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/diakonikolas19b/diakonikolas19b.pdf},
  url       = {https://proceedings.mlr.press/v99/diakonikolas19b.html}
}
Endnote
%0 Conference Paper
%T Testing Identity of Multidimensional Histograms
%A Ilias Diakonikolas
%A Daniel M. Kane
%A John Peebles
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-diakonikolas19b
%I PMLR
%P 1107--1131
%U https://proceedings.mlr.press/v99/diakonikolas19b.html
%V 99
APA
Diakonikolas, I., Kane, D.M. & Peebles, J. (2019). Testing Identity of Multidimensional Histograms. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1107-1131. Available from https://proceedings.mlr.press/v99/diakonikolas19b.html.
