An Introduction to Connectionist Theories of Semantic Cognition
Proceedings of the Analytical Connectionism Schools 2023--2024, PMLR 320:42-67, 2026.
Abstract
Jay McClelland’s lectures spotlighted foundational insights and contemporary advances in neural modelling of cognition. Beginning with the premise that mental concepts correspond to patterns of activity in networks of neurons, the connectionist paradigm provides mathematical models that predict and explain a wide range of cognitive phenomena. For instance, in semantic development, connectionist models that learn through gradual error-driven updates capture the progressive differentiation of concepts from broad to fine categories. This observation, among others, was captured by the early Rumelhart model and persists in today’s language models. However, simple error-driven learning in neural networks has shortcomings, most notably catastrophic interference, wherein learning new information disrupts previously acquired knowledge. The brain’s solutions to this problem point to additional structures: in the complementary learning systems framework, the hippocampus rapidly stores episodic experiences while the neocortex integrates them gradually over time, mitigating interference and enabling flexible knowledge consolidation. Furthermore, existing schemas facilitate faster acquisition of related concepts, reflecting how prior knowledge shapes learning efficiency. Returning to the phenomena observed in semantic development, theoretical work by Saxe, McClelland and Ganguli provides exact analytical solutions, showing how, for instance, stage-like learning trajectories and transient “illusory correlations” arise from the interaction between the statistical regularities of the environment and the nonlinear learning dynamics of a deep neural network. Taken together, these lectures underscored the enduring value of connectionism in bridging psychology, neuroscience, and machine learning.
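The interference claim is easy to exhibit in a toy setting. The sketch below is illustrative rather than a model from the lectures: the task sizes, learning rate, and linear associator are our own choices. It trains on one set of pattern associations, then on a second set alone, at which point error on the first set jumps; training on the two sets interleaved, as hippocampal replay is hypothesised to let the neocortex do, keeps both errors low.

```python
# Minimal sketch of catastrophic interference in error-driven learning,
# and of interleaved training as a remedy. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 32, 4, 12  # input dim, output dim, patterns per task

# Two unrelated sets of input -> output pattern associations.
X_a, Y_a = rng.normal(size=(n, d_in)), rng.normal(size=(n, d_out))
X_b, Y_b = rng.normal(size=(n, d_in)), rng.normal(size=(n, d_out))

def train(W, X, Y, lr=0.005, steps=4000):
    """Gradient descent on squared error for a linear map W (in place)."""
    for _ in range(steps):
        W -= lr * X.T @ (X @ W - Y)

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

# Sequential (focused) training: task A, then task B alone.
W = np.zeros((d_in, d_out))
train(W, X_a, Y_a)
print(f"after A:        error on A = {mse(W, X_a, Y_a):.3f}")
train(W, X_b, Y_b)
print(f"after B (seq.): error on A = {mse(W, X_a, Y_a):.3f}  <- interference")

# Interleaved training: old and new items mixed, as hippocampal replay
# is hypothesised to allow the neocortex to do.
W = np.zeros((d_in, d_out))
train(W, np.vstack([X_a, X_b]), np.vstack([Y_a, Y_b]))
print(f"interleaved:    error on A = {mse(W, X_a, Y_a):.3f}, "
      f"error on B = {mse(W, X_b, Y_b):.3f}")
```

Because the network is underparameterised for neither task alone, focused training on B overwrites the weight components that carried A, while the interleaved mixture finds a single solution serving both sets.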
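To make the final point concrete, here is a sketch of the mode-wise dynamics from the Saxe, McClelland and Ganguli analysis of deep linear networks, as we understand it; the notation (mode strength a, singular value s, learning time constant τ, small initial strength a₀) is ours.

```latex
% Each singular mode of the environment's input-output correlation
% matrix is learned independently in a deep linear network trained by
% gradient descent on squared error; the mode strength a(t) obeys a
% logistic equation and so follows a sigmoidal trajectory:
\[
  \tau \frac{da}{dt} = 2a\,(s - a),
  \qquad
  a(t) = \frac{s\, e^{2st/\tau}}{\,e^{2st/\tau} - 1 + s/a_0\,}.
\]
% The rise time scales as (tau/2s) ln(s/a_0): modes with large singular
% values are learned before modes with small ones.
```

Each mode hovers near a₀, rises sharply, and saturates at s; because the rise time shrinks with s, broad category distinctions (large singular values) are acquired before fine ones, yielding the stage-like, progressively differentiating trajectories described above.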