Doubly-Competitive Distribution Estimation
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2614-2623, 2019.
Abstract
Distribution estimation is a statistical-learning cornerstone. Its classical min-max formulation minimizes the estimation error for the worst distribution, hence under-performs for practical distributions that, like power-law, are often rather simple. Modern research has therefore focused on two frameworks: structural estimation that improves learning accuracy by assuming a simple structure of the underlying distribution; and competitive, or instance-optimal, estimation that achieves the performance of a genie-aided estimator up to a small excess error that vanishes as the sample size grows, regardless of the distribution. This paper combines and strengthens the two frameworks. It designs a single estimator whose excess error vanishes both at a universal rate as the sample size grows, as well as when the (unknown) distribution gets simpler. We show that the resulting algorithm significantly improves the performance guarantees for numerous competitive and structural-estimation results. The algorithm runs in near-linear time and is robust to model misspecification and domain-symbol permutations.
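To make the abstract's setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm): it draws samples from a simple power-law distribution of the kind the abstract mentions, fits a smoothed (add-1/2) empirical estimator, and measures the KL estimation loss, which shrinks as the sample size grows. All names and parameters here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: a power-law distribution over k symbols,
# an example of the "simple" distributions the abstract refers to.
k = 100
p = 1.0 / np.arange(1, k + 1)
p /= p.sum()

def kl(p, q):
    """KL divergence D(p || q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def add_half_estimator(counts):
    """Add-1/2 smoothed estimate of the distribution from symbol counts."""
    return (counts + 0.5) / (counts.sum() + 0.5 * len(counts))

# The KL loss of the estimate decreases with the sample size n,
# mirroring the abstract's claim that excess error vanishes as n grows.
for n in (100, 1000, 10000):
    sample = rng.choice(k, size=n, p=p)
    counts = np.bincount(sample, minlength=k)
    print(n, kl(p, add_half_estimator(counts)))
```

A competitive (instance-optimal) estimator would be judged against a genie that knows more about the underlying distribution; this sketch only shows the baseline loss such estimators aim to beat uniformly.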