Handling Sparsity via the Horseshoe


Carlos M. Carvalho, Nicholas G. Polson, James G. Scott;
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:73-80, 2009.


This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the relevance vector machine). The advantages of the horseshoe are its robustness in handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
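As a scale mixture of normals, the horseshoe prior can be sampled hierarchically: each coefficient is conditionally Gaussian with a local scale drawn from a standard half-Cauchy distribution. The sketch below (an illustration of this construction, not code from the paper; the function name and the fixed global scale `tau` are assumptions) shows how such draws could be generated with NumPy.

```python
import numpy as np

def sample_horseshoe(n, tau=1.0, rng=None):
    """Draw n samples from a horseshoe prior via its
    normal scale-mixture representation:
        lambda_i ~ Half-Cauchy(0, 1)
        beta_i | lambda_i, tau ~ N(0, (lambda_i * tau)^2)
    `tau` is a fixed global shrinkage scale (an assumption
    here; in the fully Bayesian treatment it gets a prior).
    """
    rng = rng or np.random.default_rng()
    # Half-Cauchy local scales: absolute value of a standard Cauchy draw
    lam = np.abs(rng.standard_cauchy(n))
    # Conditionally Gaussian coefficients given the local scales
    return rng.normal(0.0, lam * tau, size=n)

draws = sample_horseshoe(10_000, rng=np.random.default_rng(0))
```

The heavy Cauchy tails of the local scales are what let large signals escape shrinkage, while the infinite density of the half-Cauchy near zero concentrates mass at the origin, giving the prior its characteristic behavior for sparse settings.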
