Minimax optimal testing by classification
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5395-5432, 2023.
Abstract
This paper considers an ML-inspired approach to hypothesis testing known as classifier/classification-accuracy testing (CAT). In CAT, one first trains a classifier by feeding it labeled synthetic samples generated by the null and alternative distributions; the classifier is then used to predict labels of the actual data samples. This method is widely used in practice when the null and alternative are specified only via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample (TS), and likelihood-free hypothesis testing (LFHT), and show that CAT achieves (near-)minimax optimal sample complexity, in both the dependence on the total-variation (TV) separation ε and the probability of error δ, in a variety of non-parametric settings, including discrete distributions, d-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we close the high-probability sample complexity of LFHT for each class. As another highlight, we recover the minimax optimal complexity of TS over discrete distributions, which was recently established by Diakonikolas et al. (2021). The corresponding CAT simply compares empirical frequencies in the first half of the data and rejects the null when the classification accuracy on the second half is better than random.
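To make the two-sample scheme in the final sentence concrete, here is a minimal Python sketch for discrete distributions. The function name `classifier_two_sample_test` and the rejection margin `threshold` are hypothetical illustrations, not the paper's calibrated test: the paper tunes the rejection rule to the TV separation ε and the error probability δ, which this sketch does not attempt.

```python
import numpy as np

def classifier_two_sample_test(x, y, threshold):
    """Classification-accuracy two-sample test over a discrete alphabet.

    x, y      : 1-D arrays of non-negative integer samples from the two
                unknown distributions p and q.
    threshold : hypothetical rejection margin above the chance accuracy
                1/2 (the paper calibrates the rejection rule to the TV
                separation eps and error probability delta).

    Returns True if the null hypothesis p = q is rejected.
    """
    k = int(max(x.max(), y.max())) + 1   # alphabet size
    x1, x2 = np.array_split(x, 2)        # first half trains the classifier,
    y1, y2 = np.array_split(y, 2)        # second half evaluates it

    # "Train" the classifier: compare empirical frequencies on the first
    # halves. Bin i is classified as "drawn from q" whenever it occurs
    # more often in y1 than in x1.
    fx = np.bincount(x1, minlength=k) / len(x1)
    fy = np.bincount(y1, minlength=k) / len(y1)
    predict_q = fy > fx                  # per-bin classification rule

    # Classification accuracy on the held-out second halves: samples in
    # x2 should be classified as "p", samples in y2 as "q".
    acc = 0.5 * ((~predict_q[x2]).mean() + predict_q[y2].mean())

    # Under the null, both halves come from the same distribution, so the
    # held-out accuracy concentrates around 1/2 (random guessing); reject
    # when it is measurably better than chance.
    return acc > 0.5 + threshold
```

The sample split is what keeps the test honest: the per-bin rule is learned and evaluated on disjoint data, so under the null its held-out accuracy is no better than chance, while under the alternative it improves with the TV separation between p and q.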