A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing

Chengrui Li, Weihan Li, Yule Wang, Anqi Wu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:28095-28111, 2024.

Abstract

The partially observable generalized linear model (POGLM) is a powerful tool for understanding neural connectivity under the assumption that hidden neurons exist. With spike trains recorded only from visible neurons, existing works use variational inference (VI) to learn the POGLM, which highlights the difficulty of learning this latent variable model. There are two main issues: (1) the sampled Poisson hidden spike count hinders the use of the pathwise gradient estimator in VI; and (2) the existing design of the variational model is neither expressive nor time-efficient, which further degrades performance. For (1), we propose a new differentiable POGLM, which enables the pathwise gradient estimator, outperforming the score-function gradient estimator used in existing works. For (2), we propose a forward-backward message-passing sampling scheme for the variational model. Comprehensive experiments show that our differentiable POGLMs with forward-backward message passing achieve better performance on one synthetic and two real-world datasets. Furthermore, our new method yields more interpretable parameters, underscoring its significance in neuroscience.
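To make the two technical points concrete, here is a rough, self-contained illustration; it is not the paper's model or code. The first sketch contrasts the score-function (REINFORCE) gradient estimator, which applies directly to discrete Poisson spike counts, with a pathwise (reparameterization) estimator on a continuous relaxation of the count. The normal relaxation with mean = variance = rate is purely an assumption for this toy example, standing in for the paper's differentiable construction.

```python
import torch

def f(z):
    return (z - 3.0) ** 2                      # arbitrary smooth cost on spike counts

log_rate = torch.tensor(1.0, requires_grad=True)
n = 100_000

# Score-function (REINFORCE) estimator: valid for discrete Poisson samples,
# but typically high-variance.
rate = log_rate.exp()
z = torch.poisson(rate.detach().expand(n))     # non-differentiable samples
logp = torch.distributions.Poisson(rate).log_prob(z)
score_obj = (f(z).detach() * logp).mean()      # E[f(z) * d log q / d theta]
(g_score,) = torch.autograd.grad(score_obj, log_rate)

# Pathwise (reparameterization) estimator on a continuous relaxation:
# z = g(theta, eps) is differentiable in theta. The normal approximation
# (mean = var = rate) is an assumption for illustration only.
rate = log_rate.exp()                          # rebuild graph after grad()
eps = torch.randn(n)
z_relaxed = rate + rate.sqrt() * eps
(g_path,) = torch.autograd.grad(f(z_relaxed).mean(), log_rate)

print(f"score-function grad: {g_score.item():.3f}")
print(f"pathwise grad:       {g_path.item():.3f}")
```

The second sketch shows the classical forward filtering, backward sampling (FFBS) recursion on a generic discrete-state chain, the textbook instance of forward-backward message passing used to sample a latent sequence; the paper's variational scheme for hidden spike trains is only loosely analogous.

```python
import numpy as np

def ffbs(pi, A, lik, rng):
    """Forward filtering, backward sampling for a K-state hidden chain.
    pi: (K,) initial distribution; A: (K, K) transitions with
    A[i, j] = P(z_{t+1}=j | z_t=i); lik: (T, K) emission likelihoods."""
    T, K = lik.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward messages
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])         # sample the last state
    for t in range(T - 2, -1, -1):             # backward sampling
        w = alpha[t] * A[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = rng.uniform(size=(50, 2))                # stand-in emission likelihoods
print(ffbs(pi, A, lik, rng))
```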

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24aj,
  title     = {A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing},
  author    = {Li, Chengrui and Li, Weihan and Wang, Yule and Wu, Anqi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {28095--28111},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24aj/li24aj.pdf},
  url       = {https://proceedings.mlr.press/v235/li24aj.html}
}
Endnote
%0 Conference Paper
%T A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing
%A Chengrui Li
%A Weihan Li
%A Yule Wang
%A Anqi Wu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24aj
%I PMLR
%P 28095--28111
%U https://proceedings.mlr.press/v235/li24aj.html
%V 235
APA
Li, C., Li, W., Wang, Y., & Wu, A. (2024). A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:28095-28111. Available from https://proceedings.mlr.press/v235/li24aj.html.
