Is Gibbs sampling faster than Hamiltonian Monte Carlo on GLMs?

Son Luu, Zuheng Xu, Nikola Surjanovic, Miguel Biron-Lattes, Trevor Campbell, Alexandre Bouchard-Cote
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2881-2889, 2025.

Abstract

The Hamiltonian Monte Carlo (HMC) algorithm is often lauded for its ability to effectively sample from high-dimensional distributions. In this paper we challenge the presumed domination of HMC for the Bayesian analysis of GLMs. By utilizing the structure of the compute graph rather than the graphical model, we show a reduction of the time per sweep of a full-scan Gibbs sampler from $O(d^2)$ to $O(d)$, where $d$ is the number of GLM parameters. A simple change to the implementation of the Gibbs sampler allows us to perform Bayesian inference on high-dimensional GLMs that are practically infeasible with traditional Gibbs sampler implementations. We empirically demonstrate a substantial increase in effective sample size per unit time when comparing our Gibbs algorithms to state-of-the-art HMC algorithms. While Gibbs is superior in terms of dimension scaling, neither Gibbs nor HMC dominates the other: we provide numerical and theoretical evidence that HMC retains an edge in certain circumstances thanks to its advantageous condition number scaling. Interestingly, for GLMs of fixed data size, we observe that increasing dimensionality can stabilize or even decrease the condition number, shedding light on the empirical advantage of our efficient Gibbs sampler.
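To illustrate the kind of compute-graph reuse that underlies the per-sweep speedup, the following is a minimal sketch (not the authors' implementation) of a full-scan Metropolis-within-Gibbs sweep for Bayesian logistic regression: the linear predictor is cached and refreshed incrementally when a single coordinate changes, so each coordinate update costs O(n) rather than O(nd), and with the data-size factor held fixed this is the O(d^2) to O(d) per-sweep reduction described in the abstract. The N(0,1) prior, the random-walk proposal, and all variable names below are illustrative assumptions.

import numpy as np

def gibbs_sweep_cached(beta, eta, X, y, rng, step=0.5):
    # One full-scan Metropolis-within-Gibbs sweep for Bayesian logistic
    # regression with an assumed N(0, 1) prior on each coefficient.
    # The linear predictor eta = X @ beta is cached: when coordinate j
    # changes, eta is refreshed in O(n) instead of recomputed in O(n d),
    # so a full sweep over d coordinates costs O(n d) rather than O(n d^2).
    n, d = X.shape
    for j in range(d):
        proposal = beta[j] + step * rng.standard_normal()
        eta_new = eta + (proposal - beta[j]) * X[:, j]   # O(n) incremental update
        # Bernoulli log-likelihood: sum_i [ y_i * eta_i - log(1 + exp(eta_i)) ]
        log_lik_new = np.sum(y * eta_new - np.logaddexp(0.0, eta_new))
        log_lik_old = np.sum(y * eta - np.logaddexp(0.0, eta))
        log_prior_diff = -0.5 * (proposal ** 2 - beta[j] ** 2)
        if np.log(rng.uniform()) < log_lik_new - log_lik_old + log_prior_diff:
            beta[j], eta = proposal, eta_new
    return beta, eta

# Usage on synthetic data: 100 sweeps of the cached Gibbs sampler.
rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ rng.standard_normal(d)))).astype(float)
beta, eta = np.zeros(d), np.zeros(n)
for _ in range(100):
    beta, eta = gibbs_sweep_cached(beta, eta, X, y, rng)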

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-luu25a,
  title = {Is Gibbs sampling faster than Hamiltonian Monte Carlo on GLMs?},
  author = {Luu, Son and Xu, Zuheng and Surjanovic, Nikola and Biron-Lattes, Miguel and Campbell, Trevor and Bouchard-Cote, Alexandre},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {2881--2889},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/luu25a/luu25a.pdf},
  url = {https://proceedings.mlr.press/v258/luu25a.html},
  abstract = {The Hamiltonian Monte Carlo (HMC) algorithm is often lauded for its ability to effectively sample from high-dimensional distributions. In this paper we challenge the presumed domination of HMC for the Bayesian analysis of GLMs. By utilizing the structure of the compute graph rather than the graphical model, we show a reduction of the time per sweep of a full-scan Gibbs sampler from $O(d^2)$ to $O(d)$, where $d$ is the number of GLM parameters. A simple change to the implementation of the Gibbs sampler allow us to perform Bayesian inference on high-dimensional GLMs that are practically infeasible with traditional Gibbs sampler implementations. We empirically demonstrate a substantial increase in effective sample size per time when comparing our Gibbs algorithms to state-of-the-art HMC algorithms. While Gibbs is superior in terms of dimension scaling, neither Gibbs nor HMC dominate the other: we provide numerical and theoretical evidence that HMC retains an edge in certain circumstances thanks to its advantageous condition number scaling. Interestingly, for GLMs of fixed data size, we observe that increasing dimensionality can stabilize or even decrease condition number, shedding light on the empirical advantage of our efficient Gibbs sampler.}
}
Endnote
%0 Conference Paper
%T Is Gibbs sampling faster than Hamiltonian Monte Carlo on GLMs?
%A Son Luu
%A Zuheng Xu
%A Nikola Surjanovic
%A Miguel Biron-Lattes
%A Trevor Campbell
%A Alexandre Bouchard-Cote
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-luu25a
%I PMLR
%P 2881--2889
%U https://proceedings.mlr.press/v258/luu25a.html
%V 258
%X The Hamiltonian Monte Carlo (HMC) algorithm is often lauded for its ability to effectively sample from high-dimensional distributions. In this paper we challenge the presumed domination of HMC for the Bayesian analysis of GLMs. By utilizing the structure of the compute graph rather than the graphical model, we show a reduction of the time per sweep of a full-scan Gibbs sampler from $O(d^2)$ to $O(d)$, where $d$ is the number of GLM parameters. A simple change to the implementation of the Gibbs sampler allow us to perform Bayesian inference on high-dimensional GLMs that are practically infeasible with traditional Gibbs sampler implementations. We empirically demonstrate a substantial increase in effective sample size per time when comparing our Gibbs algorithms to state-of-the-art HMC algorithms. While Gibbs is superior in terms of dimension scaling, neither Gibbs nor HMC dominate the other: we provide numerical and theoretical evidence that HMC retains an edge in certain circumstances thanks to its advantageous condition number scaling. Interestingly, for GLMs of fixed data size, we observe that increasing dimensionality can stabilize or even decrease condition number, shedding light on the empirical advantage of our efficient Gibbs sampler.
APA
Luu, S., Xu, Z., Surjanovic, N., Biron-Lattes, M., Campbell, T. & Bouchard-Cote, A. (2025). Is Gibbs sampling faster than Hamiltonian Monte Carlo on GLMs? Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2881-2889. Available from https://proceedings.mlr.press/v258/luu25a.html.
