Dimension-free Information Concentration via Exp-Concavity
Proceedings of Algorithmic Learning Theory, PMLR 83:451-469, 2018.
Abstract
Information concentration of probability measures has important implications in learning theory. It was recently discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are \emph{exp-concave}, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension-independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, concentration of information immediately yields high-probability versions of many previous bounds that hold only in expectation.
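For intuition, here is a rough sketch of the mechanism the abstract describes; the notation $f = e^{-V}$, the exp-concavity parameter $\beta$, and the constant $1/\beta$ are our illustrative reconstruction from the abstract, not the paper's exact statement. Suppose $X \sim f = e^{-V}$ on $\mathbb{R}^n$ with $V$ convex, and suppose $V$ is $\beta$-exp-concave, i.e. $e^{-\beta V}$ is concave, which for smooth $V$ is equivalent to
\[
\nabla^2 V(x) \;\succeq\; \beta \, \nabla V(x) \nabla V(x)^\top .
\]
The variance Brascamp-Lieb inequality states that, for smooth $g$,
\[
\mathrm{Var}_f[g(X)] \;\le\; \mathbb{E}_f\!\left[ \nabla g(X)^\top \big(\nabla^2 V(X)\big)^{-1} \nabla g(X) \right].
\]
Taking $g = V$, so that $V(X) = -\log f(X)$ is the information content,
\[
\mathrm{Var}_f[-\log f(X)] \;\le\; \mathbb{E}_f\!\left[ \nabla V^\top \big(\nabla^2 V\big)^{-1} \nabla V \right] \;\le\; \frac{1}{\beta},
\]
where the last step follows from exp-concavity: writing $u = (\nabla^2 V)^{-1} \nabla V$ and $s = \nabla V^\top (\nabla^2 V)^{-1} \nabla V$, we have $s = u^\top \nabla^2 V \, u \ge \beta (u^\top \nabla V)^2 = \beta s^2$, hence $s \le 1/\beta$. The bound carries no explicit dependence on the ambient dimension $n$. This sketch gives only the variance form of concentration of information; the paper's results are stronger high-probability statements.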