Conditionally Gaussian PAC-Bayes

Eugenio Clerico, George Deligiannidis, Arnaud Doucet
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2311-2329, 2022.

Abstract

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
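To make the idea concrete, below is a minimal sketch (not the authors' implementation) of surrogate-free PAC-Bayes training in PyTorch. It assumes a linear binary classifier with a diagonal Gaussian posterior, so the model output is exactly Gaussian given the input and the expected misclassification error has a closed form through the Gaussian CDF; the bound shown is a McAllester-style relaxation via Pinsker's inequality rather than a tighter kl inversion. All names, data, and hyperparameters are illustrative.

import math
import torch

torch.manual_seed(0)
n, d, delta = 500, 10, 0.05

# Synthetic binary data with labels in {-1, +1}.
X = torch.randn(n, d)
w_true = torch.randn(d)
y = torch.sign(X @ w_true + 0.3 * torch.randn(n))

# Trainable posterior Q = N(mu, diag(rho^2)); fixed prior P = N(0, I).
mu = torch.zeros(d, requires_grad=True)
log_rho = torch.zeros(d, requires_grad=True)

def expected_01_error(mu, rho):
    # For a linear model, the margin y * <w, x> under Q is Gaussian with
    # mean y * <mu, x> and variance sum_i rho_i^2 x_i^2, so the expected
    # 0-1 loss is Phi(-mean/std): differentiable, no surrogate needed.
    mean = y * (X @ mu)
    std = torch.sqrt((X ** 2) @ (rho ** 2) + 1e-12)
    return torch.distributions.Normal(0.0, 1.0).cdf(-mean / std).mean()

def kl_to_prior(mu, rho):
    # KL(N(mu, diag(rho^2)) || N(0, I)) in closed form.
    return 0.5 * torch.sum(rho ** 2 + mu ** 2 - 1.0 - 2.0 * torch.log(rho))

opt = torch.optim.Adam([mu, log_rho], lr=1e-2)
for step in range(2000):
    rho = log_rho.exp()
    err = expected_01_error(mu, rho)
    kl = kl_to_prior(mu, rho)
    # McAllester-style bound obtained from the PAC-Bayes-kl bound via
    # Pinsker; the bound itself is the training objective.
    bound = err + torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    opt.zero_grad()
    bound.backward()
    opt.step()

with torch.no_grad():
    rho = log_rho.exp()
    err = expected_01_error(mu, rho)
    kl = kl_to_prior(mu, rho)
    bound = err.item() + math.sqrt((kl.item() + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
print(f"expected train error: {err.item():.3f}  PAC-Bayes bound: {bound:.3f}")

Because the expected 0-1 error is differentiable in the posterior parameters, the quantity being minimised is the generalisation bound itself, which is exactly the mismatch between objective and bound that the abstract describes avoiding.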

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-clerico22a,
  title     = {Conditionally Gaussian PAC-Bayes},
  author    = {Clerico, Eugenio and Deligiannidis, George and Doucet, Arnaud},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {2311--2329},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/clerico22a/clerico22a.pdf},
  url       = {https://proceedings.mlr.press/v151/clerico22a.html},
  abstract  = {Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.}
}
Endnote
%0 Conference Paper
%T Conditionally Gaussian PAC-Bayes
%A Eugenio Clerico
%A George Deligiannidis
%A Arnaud Doucet
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-clerico22a
%I PMLR
%P 2311--2329
%U https://proceedings.mlr.press/v151/clerico22a.html
%V 151
%X Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
APA
Clerico, E., Deligiannidis, G. & Doucet, A. (2022). Conditionally Gaussian PAC-Bayes. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2311-2329. Available from https://proceedings.mlr.press/v151/clerico22a.html.
