On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures
Proceedings of the Analytical Connectionism Schools 2023--2024, PMLR 320:68-86, 2026.
Abstract
These lecture notes offer a theoretical foundation for understanding parallel processing in neural network architectures, focusing on the influence of representation sharing across tasks. Drawing on insights from the neuroscience of cognitive control, we present a computational framework for modeling the parallel execution of multiple tasks in neural systems. We review behavioral, neural, and computational evidence suggesting that while shared task representations facilitate learning across tasks, they limit a network’s ability to process those tasks simultaneously. To quantify this trade-off, we use tools from graph theory and analytical connectionism to examine how architectural parameters influence parallel processing capacity, and to formally link the benefits of shared representations for learning with their limitations for parallel processing.
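One common way to make the graph-theoretic link concrete, in this line of work, is to represent tasks as nodes and draw an edge between any two tasks that share representations (and so cannot safely run in parallel); the network's parallel processing capacity then corresponds to the size of a maximum independent set in this task interference graph. The sketch below is a minimal brute-force illustration of that idea under this assumed formalization, not the authors' implementation; the task names and interference edges are hypothetical.

```python
from itertools import combinations

def max_parallel_set(tasks, interferes):
    """Return the largest set of mutually non-interfering tasks.

    Brute-force maximum independent set; adequate for small task graphs.
    `interferes` is a set of frozenset pairs marking task pairs that
    share representations and therefore cannot execute in parallel.
    """
    for k in range(len(tasks), 0, -1):
        for subset in combinations(tasks, k):
            # Keep the subset only if no pair within it interferes.
            if all(frozenset(p) not in interferes
                   for p in combinations(subset, 2)):
                return list(subset)
    return []

# Hypothetical example: tasks A and B share a representation, as do
# B and C, while A and C use disjoint representations.
tasks = ["A", "B", "C"]
interferes = {frozenset({"A", "B"}), frozenset({"B", "C"})}
print(max_parallel_set(tasks, interferes))  # ['A', 'C'] -> capacity 2
```

On this formalization, increasing representation sharing adds edges to the interference graph, which can only shrink its maximum independent set; this is the sense in which sharing trades learning benefits against parallel processing capacity.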