Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles

Justin Jude, Matthew G Perich, Lee E Miller, Matthias H Hennig
Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 197:234-257, 2023.

Abstract

Decoding stimuli or behaviour from recorded neural activity is a common approach to interrogating brain function in research, and an essential part of brain-computer and brain-machine interfaces. Reliable decoding, even from small neural populations, is possible because high-dimensional neural population activity typically occupies low-dimensional manifolds that are discoverable with suitable latent variable models. Over time, however, drift in the activity of individual neurons and instabilities in neural recording devices can be substantial, making stable decoding over days and weeks impractical. While this drift cannot be predicted at the level of individual neurons, population-level variations across consecutive recording sessions, such as differing sets of recorded neurons and varying permutations of the consistently recorded ones, may be learnable when the underlying manifold is stable over time. Classifying neurons as consistent versus unfamiliar across sessions, and accounting for changes in the order of the consistent neurons from session to session, may then maintain decoding performance and uncover a task-related neural manifold. Here we show that self-supervised training of a deep neural network can compensate for this inter-session variability. As a result, a sequential autoencoding model can maintain state-of-the-art behaviour decoding performance on completely unseen recording sessions several days into the future. Our approach requires only a single recording session for training the model, and is a step towards reliable, recalibration-free brain-computer interfaces.
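The abstract outlines the ingredients clearly enough to sketch what such a pipeline could look like in code. The sketch below is purely illustrative and not the paper's implementation: the GRU-based sequential autoencoder with separate activity and behaviour readouts, and the matching of per-unit embeddings by cosine similarity, are assumptions made for concreteness, and every name in it is hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqAutoencoder(nn.Module):
    """Sequential autoencoder over binned spike counts (assumed architecture).

    Encodes a trial (batch, time, n_units) into a low-dimensional latent
    trajectory, then reads out both reconstructed activity and behaviour.
    """
    def __init__(self, n_units: int, n_latent: int = 16, n_behaviour: int = 2):
        super().__init__()
        self.encoder = nn.GRU(n_units, n_latent, batch_first=True)
        self.decoder = nn.GRU(n_latent, n_latent, batch_first=True)
        self.rate_readout = nn.Linear(n_latent, n_units)           # reconstruct activity
        self.behaviour_readout = nn.Linear(n_latent, n_behaviour)  # e.g. 2D hand velocity

    def forward(self, spikes: torch.Tensor):
        latents, _ = self.encoder(spikes)   # latent trajectory on the manifold
        hidden, _ = self.decoder(latents)
        return self.rate_readout(hidden), self.behaviour_readout(hidden)

def match_units(emb_day0: torch.Tensor, emb_dayk: torch.Tensor, threshold: float = 0.5):
    """Match day-k units to day-0 units by cosine similarity of learned
    per-unit embeddings (one hypothetical form of self-supervised alignment).

    Returns, for each day-k unit, the index of its best day-0 match and a
    boolean mask marking units deemed consistent; unfamiliar units fall
    below the threshold and can be dropped before decoding.
    """
    sim = F.cosine_similarity(emb_dayk.unsqueeze(1), emb_day0.unsqueeze(0), dim=-1)
    score, best = sim.max(dim=1)
    return best, score >= threshold

# Toy usage with made-up sizes: 40 units on day 0, 35 on a later day,
# 64-dimensional unit embeddings, 8 trials of 100 time bins each.
perm, consistent = match_units(torch.randn(40, 64), torch.randn(35, 64))
model = SeqAutoencoder(n_units=40)
rates, behaviour = model(torch.randn(8, 100, 40))

The two pieces reflect the division of labour the abstract describes: alignment restores a consistent ordering of units so a decoder trained on a single session can be reused, while the sequential autoencoder exploits the stable low-dimensional manifold to decode behaviour without per-session recalibration.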

Cite this Paper


BibTeX
@InProceedings{pmlr-v197-jude23a,
  title     = {Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles},
  author    = {Jude, Justin and Perich, Matthew G and Miller, Lee E and Hennig, Matthias H},
  booktitle = {Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {234--257},
  year      = {2023},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Di Bernardo, Arianna and Miolane, Nina},
  volume    = {197},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v197/jude23a/jude23a.pdf},
  url       = {https://proceedings.mlr.press/v197/jude23a.html}
}
Endnote
%0 Conference Paper
%T Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles
%A Justin Jude
%A Matthew G Perich
%A Lee E Miller
%A Matthias H Hennig
%B Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2023
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Arianna Di Bernardo
%E Nina Miolane
%F pmlr-v197-jude23a
%I PMLR
%P 234--257
%U https://proceedings.mlr.press/v197/jude23a.html
%V 197
APA
Jude, J., Perich, M.G., Miller, L.E. & Hennig, M.H. (2023). Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles. Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 197:234-257. Available from https://proceedings.mlr.press/v197/jude23a.html.
