Geometry of abstract learned knowledge in deep RL agents

James Mochizuki-Freeman, Md Rysul Kabir, Mitesh Gulecha, Zoran Tiganj
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:405-424, 2024.

Abstract

Data from neural recordings suggest that mammalian brains represent physical and abstract task-relevant variables through low-dimensional neural manifolds. In a recent electrophysiological study (Nieh et al., 2021), mice performed an evidence accumulation task while moving along a virtual track. Nonlinear dimensionality reduction of the population activity revealed that task-relevant variables were jointly mapped in an orderly manner in the low-dimensional space. Here we trained deep reinforcement learning (RL) agents on the same evidence accumulation task and found that their neural activity can be described with a low-dimensional manifold spanned by task-relevant variables. These results provide further insight into similarities and differences between neural dynamics in mammals and deep RL agents. Furthermore, we showed that manifold learning can be used to characterize the representational space of the RL agents with the potential to improve the interpretability of decision-making in RL.
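
The abstract describes two analysis steps: collecting population activity (the unit activations of the trained agent) and applying nonlinear dimensionality reduction to test whether task-relevant variables map onto a low-dimensional manifold. The sketch below illustrates that kind of analysis; the embedding method (Isomap), the synthetic placeholder data, and the linear-readout check are illustrative assumptions, not the paper's actual pipeline.

# Minimal illustrative sketch, not the paper's pipeline: embed per-timestep
# hidden-unit activations of a trained RL agent into a low-dimensional space
# and check how well task variables (track position, accumulated evidence)
# can be read out from the embedding.
import numpy as np
from sklearn.manifold import Isomap              # generic nonlinear dimensionality reduction stand-in
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder data (assumed shapes): activations collected during evaluation
# episodes, with per-timestep task-variable labels.
hidden = rng.normal(size=(5000, 128))            # (n_timesteps, n_hidden_units)
position = rng.uniform(0, 300, size=5000)        # position along the virtual track
evidence = rng.integers(-7, 8, size=5000)        # left-minus-right cue count

# Nonlinear dimensionality reduction of the population activity.
embedding = Isomap(n_components=3, n_neighbors=30).fit_transform(hidden)

# One simple proxy for an "orderly" joint mapping: how well each task variable
# is (linearly) recoverable from the low-dimensional coordinates.
for name, target in [("position", position), ("evidence", evidence)]:
    r2 = LinearRegression().fit(embedding, target).score(embedding, target)
    print(f"R^2 for {name} from the 3-D embedding: {r2:.2f}")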

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-mochizuki-freeman24a,
  title     = {Geometry of abstract learned knowledge in deep RL agents},
  author    = {Mochizuki-Freeman, James and Kabir, Md Rysul and Gulecha, Mitesh and Tiganj, Zoran},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {405--424},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/mochizuki-freeman24a/mochizuki-freeman24a.pdf},
  url       = {https://proceedings.mlr.press/v228/mochizuki-freeman24a.html}
}
APA
Mochizuki-Freeman, J., Kabir, M. R., Gulecha, M., & Tiganj, Z. (2024). Geometry of abstract learned knowledge in deep RL agents. Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 228:405-424. Available from https://proceedings.mlr.press/v228/mochizuki-freeman24a.html.