On Margins and Derandomisation in PAC-Bayes

Felix Biggs, Benjamin Guedj
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3709-3731, 2022.

Abstract

We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop straightforwardly lead to margin bounds for various classifiers, including linear prediction—a class that includes boosting and the support vector machine—single-hidden-layer neural networks with an unusual erf activation function, and deep ReLU networks. Further we extend to partially-derandomised predictors where only some of the randomness of our estimators is removed, letting us extend bounds to cases where the concentration properties of our estimators are otherwise poor.

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-biggs22a,
  title     = {On Margins and Derandomisation in PAC-Bayes},
  author    = {Biggs, Felix and Guedj, Benjamin},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {3709--3731},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/biggs22a/biggs22a.pdf},
  url       = {https://proceedings.mlr.press/v151/biggs22a.html},
  abstract  = {We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop straightforwardly lead to margin bounds for various classifiers, including linear prediction—a class that includes boosting and the support vector machine—single-hidden-layer neural networks with an unusual erf activation function, and deep ReLU networks. Further we extend to partially-derandomised predictors where only some of the randomness of our estimators is removed, letting us extend bounds to cases where the concentration properties of our estimators are otherwise poor.}
}
Endnote
%0 Conference Paper
%T On Margins and Derandomisation in PAC-Bayes
%A Felix Biggs
%A Benjamin Guedj
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-biggs22a
%I PMLR
%P 3709--3731
%U https://proceedings.mlr.press/v151/biggs22a.html
%V 151
%X We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value. The tools we develop straightforwardly lead to margin bounds for various classifiers, including linear prediction—a class that includes boosting and the support vector machine—single-hidden-layer neural networks with an unusual erf activation function, and deep ReLU networks. Further we extend to partially-derandomised predictors where only some of the randomness of our estimators is removed, letting us extend bounds to cases where the concentration properties of our estimators are otherwise poor.
APA
Biggs, F., & Guedj, B. (2022). On Margins and Derandomisation in PAC-Bayes. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3709-3731. Available from https://proceedings.mlr.press/v151/biggs22a.html.