Visualizing and sonifying how an artificial ear hears music
Proceedings of the NeurIPS 2019 Competition and Demonstration Track, PMLR 123:192-202, 2020.
Abstract
A system is presented that visualizes and sonifies the inner workings of a sound-processing neural network in real time. The models employed have been trained on music datasets in a self-supervised way using contrastive predictive coding. An optimization procedure generates sounds that activate certain regions of the network; in this way, it can be made audible how music sounds to this artificial ear. In addition, the activations of the neurons at each point in time are visualized. For this, a force-directed graph layout technique is used to create a vivid and dynamic representation of the neural network in action.
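The contrastive predictive coding objective can be made concrete. Assuming the standard InfoNCE loss of van den Oord et al. (2018), not necessarily the exact variant used here, the encoder is trained to identify the true future latent among negatives:

$$\mathcal{L}_N = -\mathbb{E}_X\left[\log \frac{\exp\left(z_{t+k}^{\top} W_k\, c_t\right)}{\sum_{x_j \in X} \exp\left(z_j^{\top} W_k\, c_t\right)}\right]$$

where $c_t$ is the aggregated context at time $t$, $z_{t+k}$ the latent encoding $k$ steps ahead, $W_k$ a learned projection, and $X$ a set containing one positive and $N-1$ negative samples.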
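The sonification step amounts to gradient-based activation maximization on a raw waveform. Below is a minimal sketch of that idea in PyTorch; `model`, `layer`, and all hyperparameters are hypothetical stand-ins for whatever the trained CPC encoder actually exposes, not the paper's code.

```python
import torch

def sonify(model, layer, n_samples=16000, steps=256, lr=0.05):
    """Optimize a waveform so it strongly activates a chosen layer."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["act"] = output  # store the layer's activations

    handle = layer.register_forward_hook(hook)

    # Start from low-amplitude noise and ascend the activation gradient.
    sound = torch.randn(1, n_samples, requires_grad=True)
    optimizer = torch.optim.Adam([sound], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model(sound)                    # fills captured["act"] via the hook
        loss = -captured["act"].mean()  # maximize the mean activation
        loss.backward()
        optimizer.step()

    handle.remove()
    return sound.detach()               # the sound this region "likes"
```

Restricting the objective to a single layer, channel, or neuron region changes which musical structures the generated sound exhibits.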
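The force-directed visualization can likewise be sketched: neurons become nodes, strong weights become edges, and a spring layout pulls connected neurons together. The sketch below uses `networkx` and assumes the weights are available as NumPy matrices; the graph construction and `top_k` pruning are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import networkx as nx

def layout_network(weights, top_k=4):
    """Force-directed layout for a layered network.

    `weights` is a list of 2-D arrays, one per pair of consecutive
    layers, shaped (units in layer l, units in layer l + 1). Only the
    top_k strongest outgoing connections per neuron are kept as edges.
    """
    G = nx.Graph()
    for l, W in enumerate(weights):
        for i in range(W.shape[0]):
            for j in np.argsort(-np.abs(W[i]))[:top_k]:
                G.add_edge((l, i), (l + 1, int(j)),
                           weight=float(abs(W[i, j])))
    # Spring layout: heavier edges act as stronger springs, so densely
    # connected neurons settle close to each other.
    return nx.spring_layout(G, weight="weight", iterations=50)

# Toy example: node positions for a random 8-6-4 network.
positions = layout_network([np.random.randn(8, 6), np.random.randn(6, 4)])
```

Animating node size or color with the current activations then yields the dynamic picture of the network in action.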