Tight Bounds for the Expected Risk of Linear Classifiers and PAC-Bayes Finite-Sample Guarantees
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:384-392, 2014.
Abstract
We analyze the expected risk of linear classifiers for a fixed weight vector in the "minimax" setting. That is, we analyze the worst-case risk among all data distributions with a given mean and covariance. We provide a simpler proof of the tight polynomial-tail bound for general random variables. For sub-Gaussian random variables, we derive a novel tight exponential-tail bound. We also provide new PAC-Bayes finite-sample guarantees when training data is available. Our "minimax" generalization bounds are dimensionality-independent and $\mathcal{O}(\sqrt{1/m})$ for $m$ samples.
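The worst-case risk over all distributions with a given mean and covariance has a well-known closed form via the one-sided Chebyshev (Cantelli) inequality. The sketch below illustrates that classical bound for a fixed weight vector; the function and variable names (`minimax_risk`, `w`, `mu`, `Sigma`) are illustrative and not taken from the paper, and this is a sketch of the standard polynomial-tail bound, not the paper's full analysis.

```python
import numpy as np

def minimax_risk(w, mu, Sigma):
    """Worst-case probability that sign(w @ x) errs, over all data
    distributions with mean mu and covariance Sigma (Cantelli bound):
        sup_P P(w @ x <= 0) = v / (v + d^2),
    where d = w @ mu is the expected margin (assumed positive) and
    v = w @ Sigma @ w is the margin variance."""
    d = float(w @ mu)          # expected margin
    v = float(w @ Sigma @ w)   # margin variance
    if d <= 0:
        return 1.0             # no positive expected margin: risk is unbounded
    return v / (v + d * d)

# Toy example: unit covariance, expected margin 2.
w = np.array([1.0, 1.0])
mu = np.array([1.0, 1.0])
Sigma = np.eye(2)
print(minimax_risk(w, mu, Sigma))  # 2 / (2 + 4) = 1/3
```

Note that the bound depends on the data distribution only through the scalar margin statistics `d` and `v`, which is consistent with the dimensionality-independence the abstract highlights.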