Dimension-free Information Concentration via Exp-Concavity

Ya-ping Hsieh, Volkan Cevher
Proceedings of Algorithmic Learning Theory, PMLR 83:451-469, 2018.

Abstract

Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are \emph{exp-concave}, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, concentration of information immediately yields high-probability versions of many previous bounds that hold only in expectation.
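
To fix ideas, here is a rough sketch of the standard objects the abstract refers to; the notation below is ours and is meant only as orientation, not as the paper's own statement of results. Let $\mu$ be a log-concave measure on $\mathbb{R}^n$ with density $p(x) \propto e^{-V(x)}$ for a convex potential $V$. The information content of $X \sim \mu$ is $-\log p(X)$, which equals $V(X)$ up to an additive normalization constant, and the differential entropy $h(\mu) = \mathbb{E}[-\log p(X)]$ is its mean. The potential $V$ is $\alpha$-exp-concave if $x \mapsto e^{-\alpha V(x)}$ is concave, and the variance form of the Brascamp-Lieb inequality states that, for strictly convex $V$ and smooth $g$,

$$\operatorname{Var}_\mu(g) \;\le\; \int (\nabla g)^{\top} (\nabla^2 V)^{-1} \nabla g \, d\mu.$$

Concentration of information is then the statement that $-\log p(X)$ concentrates around $h(\mu)$; the abstract's claim is that, for exp-concave potentials, the concentration constants can be taken to depend only on $\alpha$ rather than on the ambient dimension $n$.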

Cite this Paper


BibTeX
@InProceedings{pmlr-v83-hsieh18a,
  title = {Dimension-free Information Concentration via Exp-Concavity},
  author = {Hsieh, Ya-ping and Cevher, Volkan},
  booktitle = {Proceedings of Algorithmic Learning Theory},
  pages = {451--469},
  year = {2018},
  editor = {Janoos, Firdaus and Mohri, Mehryar and Sridharan, Karthik},
  volume = {83},
  series = {Proceedings of Machine Learning Research},
  month = {07--09 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v83/hsieh18a/hsieh18a.pdf},
  url = {https://proceedings.mlr.press/v83/hsieh18a.html},
  abstract = {Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are \emph{exp-concave}, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, concentration of information immediately yields high-probability versions of many previous bounds that hold only in expectation.}
}
Endnote
%0 Conference Paper
%T Dimension-free Information Concentration via Exp-Concavity
%A Ya-ping Hsieh
%A Volkan Cevher
%B Proceedings of Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Firdaus Janoos
%E Mehryar Mohri
%E Karthik Sridharan
%F pmlr-v83-hsieh18a
%I PMLR
%P 451--469
%U https://proceedings.mlr.press/v83/hsieh18a.html
%V 83
%X Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are \emph{exp-concave}, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, concentration of information immediately yields high-probability versions of many previous bounds that hold only in expectation.
APA
Hsieh, Y. & Cevher, V. (2018). Dimension-free Information Concentration via Exp-Concavity. Proceedings of Algorithmic Learning Theory, in Proceedings of Machine Learning Research 83:451-469. Available from https://proceedings.mlr.press/v83/hsieh18a.html.