An Introduction to Connectionist Theories of Semantic Cognition

Ari S Benjamin, Anna-Lea Beyer, Marianne De Heer Kloots, Jaedong Hwang, Hajer Karoui, Mitchell Ostrow, Jirko Rubruck, Kai Jappe Sandbrink, Satchel Grant, Andrew M Saxe, James Lloyd McClelland
Proceedings of the Analytical Connectionism Schools 2023--2024, PMLR 320:42-67, 2026.

Abstract

Jay McClelland’s lectures spotlighted foundational insights and contemporary advances in neural modelling of cognition. Beginning with the premise that mental concepts correspond to patterns of activity in networked neurons, the connectionist paradigm provides mathematical models that predict and explain a plethora of cognitive phenomena. For instance, in semantic development, connectionist models that learn through gradual error-driven updates capture the progressive differentiation of concepts from broad to fine categories. This observation, and others, was captured in the early Rumelhart model and persists in today’s language models. However, there are shortcomings of simple error-based learning in neural networks, most notably the problem of catastrophic interference, wherein learning new information disrupts previously acquired knowledge. Biological solutions to this problem may reveal additional structures in our brains. For example, in the complementary learning systems framework, the hippocampus rapidly stores episodic experiences while the neocortex integrates them over time, thus mitigating interference and enabling flexible knowledge consolidation. Furthermore, existing schemas facilitate faster acquisition of related concepts, reflecting how prior knowledge shapes learning efficiency. Returning to the phenomena observed in semantic development, theoretical work by Saxe, McClelland and Ganguli provides exact analytical solutions, showing how, for instance, stage-like learning trajectories and transient “illusory correlations” arise from the interaction between the statistical regularities of the environment and nonlinear learning dynamics in a deep neural network. Taken together, these lectures underscored the enduring value of connectionism in bridging psychology, neuroscience, and machine learning.
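The stage-like trajectories mentioned in the abstract can be illustrated numerically. In the Saxe, McClelland and Ganguli analysis of deep linear networks, each singular mode of the input–output correlation matrix is learned along a sigmoid, roughly a(t) = s / (1 + (s/a₀ − 1)e^(−2st/τ)), so modes with larger singular values s (broader category distinctions) are learned first. The sketch below is illustrative and not taken from the paper: the four-item hierarchical dataset, hidden size, learning rate, and step count are all hypothetical choices, made only to show the progressive-differentiation ordering.

```python
import numpy as np

# Hypothetical hierarchical environment: four items (think canary, robin,
# salmon, sunfish) with features shared at different levels of the hierarchy.
# Attribute columns: [animal, bird, fish, four item-specific features].
X = np.eye(4)                      # one-hot item inputs
Y = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
H, lr = 16, 0.05                   # hypothetical hidden size and learning rate
W1 = rng.normal(scale=1e-3, size=(H, 4))   # small random initialisation
W2 = rng.normal(scale=1e-3, size=(7, H))

# Singular modes of the input-output correlation matrix (X is the identity,
# so this is just the SVD of Y^T). Singular values ~ [2.65, 1.73, 1.0, 1.0]:
# the animal/global mode is strongest, then the bird/fish split, then items.
U, S, Vt = np.linalg.svd(Y.T, full_matrices=False)

mode_strengths = []
for step in range(4000):
    # Batch gradient descent on squared error for the deep linear net W2 W1.
    pred = W2 @ W1 @ X.T
    err = pred - Y.T
    W2 -= lr * err @ (W1 @ X.T).T / 4
    W1 -= lr * W2.T @ err @ X / 4
    # Projection of the learned map onto each teacher mode.
    A = W2 @ W1
    mode_strengths.append(np.diag(U.T @ A @ Vt.T))

mode_strengths = np.array(mode_strengths)
# First step at which each mode reaches half its asymptotic strength:
half = [int(np.argmax(mode_strengths[:, i] > 0.5 * S[i])) for i in range(len(S))]
print(S.round(2))
print(half)   # crossing times grow as the singular value shrinks
```

Running this shows the broad animal/non-animal mode saturating well before the bird/fish mode, which in turn precedes the item-specific modes: progressive differentiation from broad to fine categories, emerging purely from gradient descent on a hierarchically structured environment.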

Cite this Paper


BibTeX
@InProceedings{pmlr-v320-benjamin26a,
  title = {An Introduction to Connectionist Theories of Semantic Cognition},
  author = {Benjamin, Ari S and Beyer, Anna-Lea and Kloots, Marianne De Heer and Hwang, Jaedong and Karoui, Hajer and Ostrow, Mitchell and Rubruck, Jirko and Sandbrink, Kai Jappe and Grant, Satchel and Saxe, Andrew M and McClelland, James Lloyd},
  booktitle = {Proceedings of the Analytical Connectionism Schools 2023--2024},
  pages = {42--67},
  year = {2026},
  editor = {Sarao Mannelli, Stefano and Mignacco, Francesca and Chou, Chi-Ning and Chung, SueYeon and Saxe, Andrew},
  volume = {320},
  series = {Proceedings of Machine Learning Research},
  month = {01 Jan--31 Dec},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v320/main/assets/benjamin26a/benjamin26a.pdf},
  url = {https://proceedings.mlr.press/v320/benjamin26a.html},
  abstract = {Jay McClelland’s lectures spotlighted foundational insights and contemporary advances in neural modelling of cognition. Beginning with the premise that mental concepts correspond to patterns of activity in networked neurons, the connectionist paradigm provides mathematical models that predict and explain a plethora of cognitive phenomena. For instance, in semantic development, connectionist models that learn through gradual error-driven updates capture the progressive differentiation of concepts from broad to fine categories. This observation, and others, was captured in the early Rumelhart model and persists in today’s language models. However, there are shortcomings of simple error-based learning in neural networks, most notably the problem of catastrophic interference, wherein learning new information disrupts previously acquired knowledge. Biological solutions to this problem may reveal additional structures in our brains. For example, in the complementary learning systems framework, the hippocampus rapidly stores episodic experiences while the neocortex integrates them over time, thus mitigating interference and enabling flexible knowledge consolidation. Furthermore, existing schemas facilitate faster acquisition of related concepts, reflecting how prior knowledge shapes learning efficiency. Returning to the phenomena observed in semantic development, theoretical work by Saxe, McClelland and Ganguli provides exact analytical solutions, showing how, for instance, stage-like learning trajectories and transient “illusory correlations” arise from the interaction between the statistical regularities of the environment and nonlinear learning dynamics in a deep neural network. Taken together, these lectures underscored the enduring value of connectionism in bridging psychology, neuroscience, and machine learning.}
}
Endnote
%0 Conference Paper
%T An Introduction to Connectionist Theories of Semantic Cognition
%A Ari S Benjamin
%A Anna-Lea Beyer
%A Marianne De Heer Kloots
%A Jaedong Hwang
%A Hajer Karoui
%A Mitchell Ostrow
%A Jirko Rubruck
%A Kai Jappe Sandbrink
%A Satchel Grant
%A Andrew M Saxe
%A James Lloyd McClelland
%B Proceedings of the Analytical Connectionism Schools 2023--2024
%C Proceedings of Machine Learning Research
%D 2026
%E Stefano Sarao Mannelli
%E Francesca Mignacco
%E Chi-Ning Chou
%E SueYeon Chung
%E Andrew Saxe
%F pmlr-v320-benjamin26a
%I PMLR
%P 42--67
%U https://proceedings.mlr.press/v320/benjamin26a.html
%V 320
%X Jay McClelland’s lectures spotlighted foundational insights and contemporary advances in neural modelling of cognition. Beginning with the premise that mental concepts correspond to patterns of activity in networked neurons, the connectionist paradigm provides mathematical models that predict and explain a plethora of cognitive phenomena. For instance, in semantic development, connectionist models that learn through gradual error-driven updates capture the progressive differentiation of concepts from broad to fine categories. This observation, and others, was captured in the early Rumelhart model and persists in today’s language models. However, there are shortcomings of simple error-based learning in neural networks, most notably the problem of catastrophic interference, wherein learning new information disrupts previously acquired knowledge. Biological solutions to this problem may reveal additional structures in our brains. For example, in the complementary learning systems framework, the hippocampus rapidly stores episodic experiences while the neocortex integrates them over time, thus mitigating interference and enabling flexible knowledge consolidation. Furthermore, existing schemas facilitate faster acquisition of related concepts, reflecting how prior knowledge shapes learning efficiency. Returning to the phenomena observed in semantic development, theoretical work by Saxe, McClelland and Ganguli provides exact analytical solutions, showing how, for instance, stage-like learning trajectories and transient “illusory correlations” arise from the interaction between the statistical regularities of the environment and nonlinear learning dynamics in a deep neural network. Taken together, these lectures underscored the enduring value of connectionism in bridging psychology, neuroscience, and machine learning.
APA
Benjamin, A.S., Beyer, A., Kloots, M.D.H., Hwang, J., Karoui, H., Ostrow, M., Rubruck, J., Sandbrink, K.J., Grant, S., Saxe, A.M. & McClelland, J.L. (2026). An Introduction to Connectionist Theories of Semantic Cognition. Proceedings of the Analytical Connectionism Schools 2023--2024, in Proceedings of Machine Learning Research 320:42-67. Available from https://proceedings.mlr.press/v320/benjamin26a.html.