On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures

Maximilian Mittenbühler, Sven Wientjes, Sebastian Musslick
Proceedings of the Analytical Connectionism Schools 2023--2024, PMLR 320:68-86, 2026.

Abstract

These lecture notes offer a theoretical foundation for understanding parallel processing in neural network architectures, focusing on the influence of representation sharing across tasks. Drawing on insights from the neuroscience of cognitive control, we present a computational framework for modeling the parallel execution of multiple tasks in neural systems. We review behavioral, neural, and computational evidence suggesting that while shared task representations facilitate learning across tasks, they limit a network’s ability to process those tasks simultaneously. To quantify this trade-off, we draw on tools from graph theory and analytical connectionism to examine how architectural parameters influence parallel processing capacity, and to formally link the benefits of shared representations for learning with their limitations for parallel processing.

Cite this Paper


BibTeX
@InProceedings{pmlr-v320-mittenbuhler26a,
  title     = {On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures},
  author    = {Mittenb\"{u}hler, Maximilian and Wientjes, Sven and Musslick, Sebastian},
  booktitle = {Proceedings of the Analytical Connectionism Schools 2023--2024},
  pages     = {68--86},
  year      = {2026},
  editor    = {Sarao Mannelli, Stefano and Mignacco, Francesca and Chou, Chi-Ning and Chung, SueYeon and Saxe, Andrew},
  volume    = {320},
  series    = {Proceedings of Machine Learning Research},
  month     = {01 Jan--31 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v320/main/assets/mittenbuhler26a/mittenbuhler26a.pdf},
  url       = {https://proceedings.mlr.press/v320/mittenbuhler26a.html},
  abstract  = {These lecture notes offer a theoretical foundation for understanding parallel processing in neural network architectures, focusing on the influence of representation sharing across tasks. Drawing on insights from the neuroscience of cognitive control, we present a computational framework for modeling the parallel execution of multiple tasks in neural systems. We review behavioral, neural, and computational evidence suggesting that while shared task representations facilitate learning across tasks, they limit a network's ability to process those tasks simultaneously. To quantify this trade-off, we draw on tools from graph theory and analytical connectionism to examine how architectural parameters influence parallel processing capacity, and to formally link the benefits of shared representations for learning with their limitations for parallel processing.}
}
Endnote
%0 Conference Paper
%T On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures
%A Maximilian Mittenbühler
%A Sven Wientjes
%A Sebastian Musslick
%B Proceedings of the Analytical Connectionism Schools 2023--2024
%C Proceedings of Machine Learning Research
%D 2026
%E Stefano Sarao Mannelli
%E Francesca Mignacco
%E Chi-Ning Chou
%E SueYeon Chung
%E Andrew Saxe
%F pmlr-v320-mittenbuhler26a
%I PMLR
%P 68--86
%U https://proceedings.mlr.press/v320/mittenbuhler26a.html
%V 320
%X These lecture notes offer a theoretical foundation for understanding parallel processing in neural network architectures, focusing on the influence of representation sharing across tasks. Drawing on insights from the neuroscience of cognitive control, we present a computational framework for modeling the parallel execution of multiple tasks in neural systems. We review behavioral, neural, and computational evidence suggesting that while shared task representations facilitate learning across tasks, they limit a network's ability to process those tasks simultaneously. To quantify this trade-off, we draw on tools from graph theory and analytical connectionism to examine how architectural parameters influence parallel processing capacity, and to formally link the benefits of shared representations for learning with their limitations for parallel processing.
APA
Mittenbühler, M., Wientjes, S., & Musslick, S. (2026). On the Impact of Representation Sharing on Parallel Processing in Neural Network Architectures. Proceedings of the Analytical Connectionism Schools 2023--2024, in Proceedings of Machine Learning Research 320:68-86. Available from https://proceedings.mlr.press/v320/mittenbuhler26a.html.