Fast Marginal Likelihood Maximisation for Sparse Bayesian Models

Michael E. Tipping, Anita C. Faul
Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, PMLR R4:276-283, 2003.

Abstract

The 'sparse Bayesian' modelling approach, as exemplified by the 'relevance vector machine', enables sparse classification and regression functions to be obtained by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular 'support vector machine', but the necessary 'training' procedure (optimisation of the marginal likelihood function) is typically much slower. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.
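As a rough illustration of the sequential scheme the abstract describes, the sketch below implements the well-known add/re-estimate/delete update rules for a sparse Bayesian linear regression model, driven by the quantities theta_i = q_i^2 - s_i. It recomputes the full covariance matrix C at every step for clarity; the paper's actual contribution is to maintain these quantities with efficient incremental updates. The function name `sequential_sbl`, the simple round-robin candidate sweep, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sequential_sbl(Phi, t, sigma2, n_iter=100):
    """Illustrative sketch of sequential sparse Bayesian learning.

    Cycles through candidate basis functions, adding, re-estimating,
    or deleting them according to the sign of theta_i = q_i^2 - s_i.
    C is rebuilt and inverted in full each step for clarity; the fast
    algorithm maintains these quantities incrementally.
    """
    N, M = Phi.shape
    alpha = np.full(M, np.inf)                # inf = basis function excluded

    # Initialise with the single best-aligned basis function.
    norms2 = np.sum(Phi**2, axis=0)
    proj = (Phi.T @ t)**2 / norms2
    i0 = int(np.argmax(proj))
    alpha[i0] = norms2[i0] / (proj[i0] / norms2[i0] - sigma2)

    for it in range(n_iter):
        active = np.isfinite(alpha)
        Phi_a = Phi[:, active]
        C = sigma2 * np.eye(N) + Phi_a @ np.diag(1.0 / alpha[active]) @ Phi_a.T
        Cinv = np.linalg.inv(C)
        S = np.einsum('nm,nk,km->m', Phi, Cinv, Phi)   # S_m = phi_m' C^-1 phi_m
        Q = Phi.T @ Cinv @ t                           # Q_m = phi_m' C^-1 t
        s, q = S.copy(), Q.copy()
        # For active functions, factor out their own contribution to C.
        s[active] = alpha[active] * S[active] / (alpha[active] - S[active])
        q[active] = alpha[active] * Q[active] / (alpha[active] - S[active])
        theta = q**2 - s

        i = it % M                                     # sweep candidates in turn
        if theta[i] > 0:
            alpha[i] = s[i]**2 / theta[i]              # add or re-estimate
        elif active[i]:
            alpha[i] = np.inf                          # delete
    return alpha

# Toy demonstration: the target depends on one column of a random dictionary,
# so the final model should retain that column and prune most others.
rng = np.random.default_rng(0)
N, M = 50, 10
Phi = rng.standard_normal((N, M))
t = 2.0 * Phi[:, 3] + 0.1 * rng.standard_normal(N)
alpha = sequential_sbl(Phi, t, sigma2=0.01)
print("active basis functions:", np.flatnonzero(np.isfinite(alpha)))
```

Because individual noise columns can transiently enter the model before being deleted, the active set may briefly contain spurious members during the sweep; the deletion rule is what keeps the final model sparse.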

Cite this Paper


BibTeX
@InProceedings{pmlr-vR4-tipping03a,
  title = {Fast Marginal Likelihood Maximisation for Sparse Bayesian Models},
  author = {Tipping, Michael E. and Faul, Anita C.},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  pages = {276--283},
  year = {2003},
  editor = {Bishop, Christopher M. and Frey, Brendan J.},
  volume = {R4},
  series = {Proceedings of Machine Learning Research},
  month = {03--06 Jan},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/r4/tipping03a/tipping03a.pdf},
  url = {https://proceedings.mlr.press/r4/tipping03a.html},
  abstract = {The 'sparse Bayesian' modelling approach, as exemplified by the 'relevance vector machine', enables sparse classification and regression functions to be obtained by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular 'support vector machine', but the necessary 'training' procedure (optimisation of the marginal likelihood function) is typically much slower. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.},
  note = {Reissued by PMLR on 01 April 2021.}
}
Endnote
%0 Conference Paper
%T Fast Marginal Likelihood Maximisation for Sparse Bayesian Models
%A Michael E. Tipping
%A Anita C. Faul
%B Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2003
%E Christopher M. Bishop
%E Brendan J. Frey
%F pmlr-vR4-tipping03a
%I PMLR
%P 276--283
%U https://proceedings.mlr.press/r4/tipping03a.html
%V R4
%X The 'sparse Bayesian' modelling approach, as exemplified by the 'relevance vector machine', enables sparse classification and regression functions to be obtained by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular 'support vector machine', but the necessary 'training' procedure (optimisation of the marginal likelihood function) is typically much slower. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.
%Z Reissued by PMLR on 01 April 2021.
APA
Tipping, M.E. & Faul, A.C. (2003). Fast Marginal Likelihood Maximisation for Sparse Bayesian Models. Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R4:276-283. Available from https://proceedings.mlr.press/r4/tipping03a.html. Reissued by PMLR on 01 April 2021.