A Kernelized Stein Discrepancy for Goodness-of-fit Tests
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:276-284, 2016.
We derive a new discrepancy statistic for measuring differences between two probability distributions by combining Stein’s identity with reproducing kernel Hilbert space theory. We apply our result to test how well a probabilistic model fits a set of observations, and derive a new class of powerful goodness-of-fit tests that are widely applicable to complex, high-dimensional distributions, even those with computationally intractable normalization constants. We study both the theoretical and empirical properties of our methods thoroughly.
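To make the idea concrete, the following is a minimal sketch (not the authors' reference code) of a U-statistic estimate of the squared kernelized Stein discrepancy with an RBF kernel. It only requires the score function ∇ log p(x) of the model, which is why the normalization constant never enters; the function name `ksd_u_stat` and the bandwidth parameter `h` are illustrative choices, not part of the paper.

```python
import numpy as np

def ksd_u_stat(X, score, h=1.0):
    """U-statistic estimate of the squared kernelized Stein discrepancy.

    X      : (n, d) array of samples from the unknown distribution q.
    score  : function mapping (n, d) samples to (n, d) scores grad log p(x).
    h      : RBF kernel bandwidth, k(x, y) = exp(-||x - y||^2 / (2 h^2)).
    """
    n, d = X.shape
    S = score(X)                        # (n, d) model scores grad log p
    D = X[:, None, :] - X[None, :, :]   # (n, n, d) pairwise differences x_i - x_j
    sq = np.sum(D**2, axis=-1)          # (n, n) squared distances
    K = np.exp(-sq / (2 * h**2))        # RBF kernel matrix

    # Stein kernel u_p(x, x') = s(x)^T k s(x') + s(x)^T grad_{x'} k
    #                           + grad_x k^T s(x') + trace(grad_x grad_{x'} k)
    t1 = (S @ S.T) * K
    # grad_{x'} k = (x - x') / h^2 * k ;  grad_x k = -(x - x') / h^2 * k
    t2 = np.einsum('id,ijd->ij', S, D) / h**2 * K
    t3 = -np.einsum('jd,ijd->ij', S, D) / h**2 * K
    # trace of the mixed second derivative of the RBF kernel
    t4 = (d / h**2 - sq / h**4) * K

    U = t1 + t2 + t3 + t4
    np.fill_diagonal(U, 0.0)            # U-statistic: exclude i == j terms
    return U.sum() / (n * (n - 1))
```

As a sanity check, samples drawn from the model itself should give a discrepancy near zero, while samples from a shifted distribution (with the same, now-misspecified score) should give a clearly positive value; a goodness-of-fit test thresholds this statistic, e.g. via the bootstrap.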