On Margins and Derandomisation in PAC-Bayes
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3709-3731, 2022.
Abstract
We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop straightforwardly lead to margin bounds for various classifiers, including linear prediction (a class that includes boosting and the support vector machine), single-hidden-layer neural networks with an unusual erf activation function, and deep ReLU networks. Further, we consider partially-derandomised predictors, where only some of the randomness of our estimators is removed, letting us extend our bounds to cases where the concentration properties of the estimators are otherwise poor.
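To make the flavour of the recipe concrete, the sketch below combines a standard PAC-Bayesian (kl-form) bound with a generic concentration-to-margin step for binary classification with real-valued scores. It is illustrative only, not the paper's exact theorem: the notation ($P$, $Q$, $f_w$, $\gamma$, $\epsilon$) and the particular kl bound are standard PAC-Bayes conventions assumed here rather than taken from the abstract.

```latex
% Illustrative sketch only (standard PAC-Bayes notation, not this paper's
% exact statement). Step 1: a kl-form PAC-Bayes bound on the randomised
% (Gibbs) predictor: with probability at least 1 - \delta over an i.i.d.
% sample S of size n, simultaneously for all posteriors Q over predictors,
\[
  \mathrm{kl}\!\left(\hat{L}_S(Q)\,\middle\|\,L(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{n}.
\]
% Step 2: derandomisation via concentration. Suppose that for every input x
% the randomised outputs concentrate around the deterministic score f_w(x):
\[
  \Pr_{h \sim Q}\!\big(\,|h(x) - f_w(x)| \ge \tfrac{\gamma}{2}\,\big) \;\le\; \epsilon .
\]
% Then a misclassification of f_w (i.e. y f_w(x) \le 0) forces either a
% \gamma/2-margin error of h or a deviation of at least \gamma/2, so
\[
  L_{0\text{-}1}(f_w) \;\le\; L_{\gamma/2}(Q) + \epsilon ,
\]
% and the margin risk L_{\gamma/2}(Q) of the randomised predictor is in turn
% controlled by the PAC-Bayes inequality above applied to the margin loss.
```

In this sketch the deterministic margin bound inherits the $\mathrm{KL}(Q\,\|\,P)$ complexity term of the randomised predictor, which is the sense in which concentration lets the randomness be removed.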