Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music

Carlos Mariano Salcedo, Eran Egozy
Proceedings of Machine Learning Research, PMLR 303:1-14, 2026.

Abstract

We present Artificial Dancing Intelligence (ADI), an interactive neural music visualizer that is accessed through a web app but performs inference entirely on local devices. Our approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems. ADI uses an audio stream's average energy (its RMS) to modulate a neural cellular automaton (NCA) that produces visual patterns that move and 'dance' along with the audio stream in real time. Through the web interface, users can adjust the relationship between the music's energy and the NCA system to create unique visual performances from any music audio stream. ADI achieves smooth, real-time responsiveness on modern consumer devices.
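As context for the control signal the abstract describes, per-frame RMS energy is the root mean square of the samples in an audio frame. A minimal sketch of that computation follows; this is illustrative only and not the authors' implementation (the function name and frame layout are assumptions):

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (a list of float samples).

    Returns 0.0 for an empty frame. In a visualizer like the one described,
    this per-frame scalar could modulate parameters of the NCA update.
    """
    if not frame:
        return 0.0
    return math.sqrt(sum(s * s for s in frame) / len(frame))

# A frame alternating between +0.5 and -0.5 has RMS exactly 0.5.
print(rms([0.5, -0.5, 0.5, -0.5]))  # → 0.5
```

In practice a web app would compute this from short windows of the live audio stream (e.g. via the Web Audio API) and smooth the resulting envelope before driving the visuals.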

Cite this Paper


BibTeX
@InProceedings{pmlr-v303-salcedo26a,
  title     = {Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music},
  author    = {Salcedo, Carlos Mariano and Egozy, Eran},
  booktitle = {Proceedings of Machine Learning Research},
  pages     = {1--14},
  year      = {2026},
  editor    = {Herremans, Dorien and Bhandari, Keshav and Roy, Abhinaba and Colton, Simon and Barthet, Mathieu},
  volume    = {303},
  series    = {Proceedings of Machine Learning Research},
  month     = {26 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v303/main/assets/salcedo26a/salcedo26a.pdf},
  url       = {https://proceedings.mlr.press/v303/salcedo26a.html},
  abstract  = {We present Artificial Dancing Intelligence (ADI), an interactive neural music visualizer that is accessed through a web app but performs inference entirely on local devices. Our approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems. ADI uses an audio stream's average energy (its RMS) to modulate a neural cellular automaton (NCA) that produces visual patterns that move and 'dance' along with the audio stream in real time. Through the web interface, users can adjust the relationship between the music's energy and the NCA system to create unique visual performances from any music audio stream. ADI achieves smooth, real-time responsiveness on modern consumer devices.}
}
Endnote
%0 Conference Paper
%T Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music
%A Carlos Mariano Salcedo
%A Eran Egozy
%B Proceedings of Machine Learning Research
%C Proceedings of Machine Learning Research
%D 2026
%E Dorien Herremans
%E Keshav Bhandari
%E Abhinaba Roy
%E Simon Colton
%E Mathieu Barthet
%F pmlr-v303-salcedo26a
%I PMLR
%P 1--14
%U https://proceedings.mlr.press/v303/salcedo26a.html
%V 303
%X We present Artificial Dancing Intelligence (ADI), an interactive neural music visualizer that is accessed through a web app but performs inference entirely on local devices. Our approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems. ADI uses an audio stream's average energy (its RMS) to modulate a neural cellular automaton (NCA) that produces visual patterns that move and 'dance' along with the audio stream in real time. Through the web interface, users can adjust the relationship between the music's energy and the NCA system to create unique visual performances from any music audio stream. ADI achieves smooth, real-time responsiveness on modern consumer devices.
APA
Salcedo, C. M. & Egozy, E. (2026). Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music. Proceedings of Machine Learning Research, in Proceedings of Machine Learning Research 303:1-14. Available from https://proceedings.mlr.press/v303/salcedo26a.html.
