Visualizing and sonifying how an artificial ear hears music

Vincent Herrmann
Proceedings of the NeurIPS 2019 Competition and Demonstration Track, PMLR 123:192-202, 2020.

Abstract

A system is presented that visualizes and sonifies the inner workings of a sound-processing neural network in real time. The models employed have been trained on music datasets in a self-supervised way using contrastive predictive coding. An optimization procedure generates sounds that activate particular regions of the network, making it audible how music sounds to this artificial ear. In addition, the activations of the neurons at each point in time are visualized. For this, a force-directed graph layout technique is used to create a vivid and dynamic representation of the neural network in action.
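
The self-supervised training objective is contrastive predictive coding (CPC). As a rough illustration, the sketch below implements the InfoNCE loss at the heart of CPC in PyTorch; the encoder, context network, and batch construction are left out, and all names here are assumptions, since the abstract does not describe the model in detail.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(predictions, targets):
        """Contrastive loss: each predicted future code should match its
        own target frame; the other targets in the batch act as negatives.

        predictions: (batch, dim) predicted latent codes
        targets:     (batch, dim) encoder outputs for the true future frames
        """
        # Similarity of every prediction against every candidate target.
        logits = predictions @ targets.t()  # (batch, batch)
        # The matching target for sample i sits on the diagonal.
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)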
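
The sonification is a form of activation maximization: a sound is optimized by gradient ascent until it strongly excites a chosen part of the network. The following sketch shows the general recipe, assuming a hypothetical `model` that maps a raw waveform to a vector of unit activations; the paper's concrete procedure (signal parameterization, regularizers) is not specified in the abstract and may differ.

    import torch

    def sonify_unit(model, unit_index, num_samples=16000, steps=200, lr=0.05):
        model.eval()
        # Start from quiet noise and optimize the waveform directly.
        sound = torch.randn(1, num_samples) * 0.01
        sound.requires_grad_(True)
        optimizer = torch.optim.Adam([sound], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            activations = model(sound)  # assumed output shape: (1, num_units)
            # Maximize the chosen unit by minimizing its negative activation.
            loss = -activations[0, unit_index]
            loss.backward()
            optimizer.step()
            # Keep the signal in a valid audio range.
            with torch.no_grad():
                sound.clamp_(-1.0, 1.0)
        return sound.detach()

The resulting waveform can then be played back to hear what the chosen region of the network responds to.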
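
For the visualization, a force graph layout arranges the neurons so that their activity can be followed over time. The toy integration step below shows the standard force-directed recipe (pairwise repulsion plus spring attraction along edges) in NumPy; the actual forces, scaling, and real-time implementation used in the demo are assumptions here.

    import numpy as np

    def force_layout_step(pos, edges, dt=0.05, k=0.1):
        """One layout step: all nodes repel each other, edges act as springs.

        pos:   (n, 2) array of node positions
        edges: list of (i, j) index pairs
        """
        forces = np.zeros_like(pos)
        # Pairwise inverse-square repulsion between all nodes.
        diff = pos[:, None, :] - pos[None, :, :]   # (n, n, 2)
        dist2 = (diff ** 2).sum(axis=-1) + 1e-6    # avoid division by zero
        forces += (diff / dist2[..., None]).sum(axis=1) * k
        # Spring attraction along edges.
        for i, j in edges:
            d = pos[j] - pos[i]
            forces[i] += d * k
            forces[j] -= d * k
        return pos + dt * forces

Calling this once per animation frame yields the kind of vivid, continuously rearranging graph described in the abstract.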

Cite this Paper


BibTeX
@InProceedings{pmlr-v123-herrmann20a,
  title     = {Visualizing and sonifying how an artificial ear hears music},
  author    = {Herrmann, Vincent},
  booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track},
  pages     = {192--202},
  year      = {2020},
  editor    = {Escalante, Hugo Jair and Hadsell, Raia},
  volume    = {123},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--14 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v123/herrmann20a/herrmann20a.pdf},
  url       = {https://proceedings.mlr.press/v123/herrmann20a.html}
}
Endnote
%0 Conference Paper
%T Visualizing and sonifying how an artificial ear hears music
%A Vincent Herrmann
%B Proceedings of the NeurIPS 2019 Competition and Demonstration Track
%C Proceedings of Machine Learning Research
%D 2020
%E Hugo Jair Escalante
%E Raia Hadsell
%F pmlr-v123-herrmann20a
%I PMLR
%P 192--202
%U https://proceedings.mlr.press/v123/herrmann20a.html
%V 123
APA
Herrmann, V. (2020). Visualizing and sonifying how an artificial ear hears music. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, in Proceedings of Machine Learning Research 123:192-202. Available from https://proceedings.mlr.press/v123/herrmann20a.html.
