<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Proceedings of Machine Learning Research</title>
    <description>Proceedings of the Analytical Connectionism Schools 2023--2024
  Held in London, UK and New York, USA from 01 January to 31 December 2024

Published as Volume 320 by the Proceedings of Machine Learning Research on 06 April 2026.

Volume Edited by:
  Stefano Sarao Mannelli
  Francesca Mignacco
  Chi-Ning Chou
  SueYeon Chung
  Andrew Saxe

Series Editors:
  Neil D. Lawrence
</description>
    <link>https://proceedings.mlr.press/v320/</link>
    <atom:link href="https://proceedings.mlr.press/v320/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Mon, 06 Apr 2026 07:19:17 +0000</pubDate>
    <lastBuildDate>Mon, 06 Apr 2026 07:19:17 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>Natural Image Statistics, Visual Representation, and Denoising</title>
        <description>This article, gathered and elaborated from a lecture by Eero Simoncelli at the 2024 Analytical Connectionism Summer School, reviews several approaches for modeling the probability distribution of natural images and their interaction with the problem of image denoising. The lecture starts with the Gaussian spectral model of the 1950s as a conceptual foundation and quantitative baseline, followed by the sparse coding models that took hold in the 1990s. These statistical models of natural images can be used as prior probability distributions for solving inverse problems such as denoising, using a Bayesian framework. Finally, the lecture describes recent work in machine learning in which the process of constructing a denoiser is reversed: a neural network is trained to solve the denoising problem without first specifying a prior distribution, and this trained network is subsequently used as an implicit model of the distribution of natural images. Images can be drawn from this implicit model through a reverse diffusion process, and the model can also be used to solve inference problems. This allows researchers to investigate the extent to which these DNNs are generalizing beyond their training data (as necessary for accurately modeling the distribution of natural images) as opposed to memorizing the images they were trained on.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/thobani26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/thobani26a.html</guid>
        
        
      </item>
    
      <item>
        <title>Thinking of Neural Networks Like a Physicist: The Statistical Physics of Machine Learning</title>
        <description>Machine learning (ML) enables us to uncover patterns from data and generalize this information to new, unseen examples. The rapid development of the field has transformed not only classical computer science domains—such as computer vision, natural language processing, and speech recognition—but has also begun to reshape scientific research more broadly, including psychology and neuroscience. This paper presents a pedagogical introduction to an emerging line of research that seeks to interpret ML systems by &quot;thinking like a physicist,&quot; as presented by Florent Krzakala at Analytical Connectionism 2023. In particular, the methods and intuition of statistical physics—which has a long history of studying complex systems—can be fruitfully applied to high-dimensional problems encountered in ML. First, the paper presents applications of statistical physics techniques to unsupervised machine learning, in which patterns are found in data without any supervisory signal. The replica method, an approximation that allows the expected value of the logarithm of the problem’s likelihood ratio to be computed efficiently, greatly facilitates the analysis of classic unsupervised learning problems such as sparse signal denoising and clustering. The approximate message passing algorithm provides an iterative approach to solving these types of problems. Second, the paper turns to the supervised learning setting, in which ground-truth training labels are used to train a learning algorithm. It characterizes the learning dynamics of neural networks with a single hidden layer in two regimes. In the lazy learning regime, learning occurs only in the readout layer of the neural network, with fixed embedding weights. With an infinitely wide hidden layer, this corresponds to the neural tangent kernel regime, in which the network behaves linearly over its features and can be used to characterize the possible solutions to the learning problem. Meanwhile, in the feature learning regime, learning occurs in all weights, including embeddings. The paper ends with a brief discussion of current research going beyond single-sample stochastic gradient descent, and a brief introduction to applications of the concepts outlined in this paper to cognitive psychology.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/sandbrink26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/sandbrink26a.html</guid>
        
        
      </item>
    
      <item>
        <title>From Place Cells to Predictive Codes: Lecture Notes on the Dynamic Hippocampus</title>
        <description>These lecture notes are based on a lecture delivered by André A. Fenton and are organized around key publications from his research program. The document synthesizes these case studies to examine the shift from single-neuron place coding to ensemble-level population dynamics, oscillatory coordination, and low-dimensional manifold structure in the hippocampus. Drawing on rodent and primate evidence, it situates these findings within frameworks from dynamical systems theory and machine learning. The aim is not to provide a comprehensive review, but to articulate a conceptual perspective on hippocampal function grounded in a specific line of empirical work.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/salgado-menez26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/salgado-menez26a.html</guid>
        
        
      </item>
    
      <item>
        <title>On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures</title>
        <description>These lecture notes offer a theoretical foundation for understanding parallel processing in neural network architectures, focusing on the influence of representation sharing across tasks. Drawing on insights from the neuroscience of cognitive control, we present a computational framework for modeling the parallel execution of multiple tasks in neural systems. We review behavioral, neural, and computational evidence suggesting that while shared task representations facilitate learning across tasks, they limit a network’s ability to process those tasks simultaneously. To quantify this trade-off, we draw on tools from graph theory and analytical connectionism to examine how architectural parameters influence parallel processing capacity, and to formally link the benefits of shared representations for learning with their limitations for parallel processing.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/mittenbuhler26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/mittenbuhler26a.html</guid>
        
        
      </item>
    
      <item>
        <title>Unifying neural population dynamics, manifold geometry, and circuit structure</title>
        <description>A central aim of neuroscience is to understand how the dynamics of neural circuits give rise to cognitive functions such as perception, attention, and decision-making. Cortical neural circuits are hierarchical and recurrent, resulting in rich temporal dynamics of individual neurons and distributed selectivity across the population. Classical neural circuit models capable of characterizing cognitive processes struggle to account for this complexity of cortical responses. Recent approaches leveraging heterogeneous neural networks address this complexity by characterizing activity in terms of interactions among latent states. In these lecture notes, we highlight recent work aimed at increasing the interpretability of these models and relating them to classical circuit models. These new analytical approaches connect neural population dynamics, geometry of latent manifolds, and the underlying circuit structure to enable mechanistic insights into cognitive processes.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/manoogian26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/manoogian26a.html</guid>
        
        
      </item>
    
      <item>
        <title>A Computational Basis of Natural Intelligence</title>
        <description>These lecture notes are based on lectures delivered by Professor Jonathan Cohen at the summer school &quot;Analytical Connectionism&quot; at the Flatiron Institute in 2024. The notes discuss a computational basis for understanding natural intelligence, focusing on four theoretical frameworks that help explain it: the Relational Bottleneck, the Rational Boundedness of Cognitive Control, Miller’s Law, and Episodic Generalization and Optimization.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/karami26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/karami26a.html</guid>
        
        
      </item>
    
      <item>
        <title>Models of attractor dynamics in the brain</title>
        <description>Attractor dynamics are a fundamental computational motif in neural circuits, supporting diverse cognitive functions through stable, self-sustaining patterns of neural activity. In these lecture notes, we review four key examples that demonstrate how autoassociative neural network models can elucidate the computational mechanisms underlying attractor-based information processing in biological neural systems performing cognitive functions. Drawing on empirical evidence, we explore hippocampal spatial representations, visual classification in the inferotemporal cortex, perceptual adaptation and priming, and working-memory biases shaped by sensory history. Across these domains, attractor network models reveal common computational principles and provide analytical insights into how experience shapes neural activity and behavior. Our synthesis underscores the value of attractor models as powerful tools for probing the neural basis of cognition and behavior.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/fakhoury26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/fakhoury26a.html</guid>
        
        
      </item>
    
      <item>
        <title>The Statistics of Natural Experience</title>
        <description>These lecture notes present Linda Smith’s comprehensive analysis of the statistics of natural experience and its consequences for how we think about learning and intelligence. The material explores how statistics shape behavior and learning, details a developmental curriculum, examines properties of natural statistics, and investigates the dynamic coupling of parents and toddlers. Through multiple perspectives and examples, Smith offers insights into how statistical patterns in our environment influence cognitive development and learning processes. This collection is particularly valuable for machine learning readers seeking to understand the statistical foundations of human cognition and their applications to artificial intelligence systems.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/dong26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/dong26a.html</guid>
        
        
      </item>
    
      <item>
        <title>Reinforcement learning: Computational modeling of learning and decision-making</title>
        <description>Reinforcement learning (RL), as a computational modeling framework, is a formal approach to understanding and building agents, natural or artificial, that learn to make decisions based on rewards they receive from the environment. In these lecture notes, we begin by exploring how RL has historically been used in psychology and neuroscience to investigate reward-driven learning, before introducing it more formally from a machine learning perspective. We then demonstrate its utility in building cognitive models that explain the processes and mechanisms underlying human learning and decision-making at both the behavioral and neural levels. Finally, we discuss recent work that brings together the theory-driven approach taken by RL and the data-driven approach taken by artificial neural networks to build more predictive, yet interpretable, models of human behavior. Together, this work highlights the value of RL as a computational modeling framework for cognitive neuroscience.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/cohen26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/cohen26a.html</guid>
        
        
      </item>
    
      <item>
        <title>An Introduction to Connectionist Theories of Semantic Cognition</title>
        <description>Jay McClelland’s lectures spotlighted foundational insights and contemporary advances in neural modelling of cognition. Beginning with the premise that mental concepts correspond to patterns of activity in networked neurons, the connectionist paradigm provides mathematical models that predict and explain a plethora of cognitive phenomena. For instance, in semantic development, connectionist models that learn through gradual error-driven updates capture the progressive differentiation of concepts from broad to fine categories. This observation, among others, was captured in the early Rumelhart model and persists in today’s language models. However, there are shortcomings of simple error-based learning in neural networks, most notably the problem of catastrophic interference, wherein learning new information disrupts previously acquired knowledge. Biological solutions to this problem may reveal additional structures in our brains. For example, in the complementary learning systems framework, the hippocampus rapidly stores episodic experiences while the neocortex integrates them over time, thus mitigating interference and enabling flexible knowledge consolidation. Furthermore, existing schemas facilitate faster acquisition of related concepts, reflecting how prior knowledge shapes learning efficiency. Returning to the phenomena observed in semantic development, theoretical work by Saxe, McClelland and Ganguli provides exact analytical solutions, showing how, for instance, stage-like learning trajectories and transient &quot;illusory correlations&quot; arise from the interaction between the statistical regularities of the environment and nonlinear learning dynamics in a deep neural network. Taken together, these lectures underscored the enduring value of connectionism in bridging psychology, neuroscience, and machine learning.</description>
        <pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate>
        <link>https://proceedings.mlr.press/v320/benjamin26a.html</link>
        <guid isPermaLink="true">https://proceedings.mlr.press/v320/benjamin26a.html</guid>
        
        
      </item>
    
  </channel>
</rss>
