ANAVI: Audio Noise Awareness using Visual of Indoor environments for NAVIgation

Vidhi Jain, Rishi Veerapaneni, Yonatan Bisk
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3924-3942, 2025.

Abstract

We propose Audio Noise Awareness using Visuals of Indoors for NAVIgation for quieter robot path planning. While humans are naturally aware of the noise they make and its impact on those around them, robots currently lack this awareness. A key challenge in achieving audio awareness for robots is estimating how loud the robot's actions will be at a listener's location. Since sound depends upon the geometry and material composition of rooms, we train the robot to passively perceive loudness using visual observations of indoor environments. To this end, we generate data on how loud an 'impulse' sounds at different listener locations in simulated homes, and train our Acoustic Noise Predictor (ANP). Next, we collect acoustic profiles corresponding to different actions for navigation. Unifying ANP with action acoustics, we demonstrate experiments with wheeled (Hello Robot Stretch) and legged (Unitree Go2) robots so that these robots adhere to the noise constraints of the environment. All simulated and real-world data, code, and model checkpoints are released at https://anavi-corl24.github.io/.
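The core idea, predicting listener-side loudness from a visual observation and combining it with per-action acoustic profiles to cost candidate paths, can be sketched as below. This is a minimal illustrative sketch, not the released implementation: the names (ACTION_DB, predicted_attenuation_db, path_cost), the dB values, and the free-field distance fallback standing in for the trained ANP are all assumptions.

    import math

    # Hypothetical acoustic profiles: source loudness (dB) per navigation
    # action, in the spirit of the paper's measured action acoustics.
    # The values below are illustrative, not taken from the paper.
    ACTION_DB = {"roll_slow": 50.0, "roll_fast": 62.0, "turn_in_place": 55.0}

    def predicted_attenuation_db(image, robot_xy, listener_xy):
        """Stand-in for the trained Acoustic Noise Predictor (ANP).

        The real model conditions on a visual observation of the room so the
        prediction reflects geometry and materials; this placeholder uses a
        free-field inverse-distance law instead.
        """
        d = max(math.dist(robot_xy, listener_xy), 0.25)
        return 20.0 * math.log10(d / 0.25)  # dB lost vs. a 0.25 m reference

    def noise_at_listener_db(image, robot_xy, listener_xy, action):
        # Loudness heard at the listener = the action's source loudness
        # minus the scene-dependent attenuation predicted from the image.
        return ACTION_DB[action] - predicted_attenuation_db(
            image, robot_xy, listener_xy
        )

    def path_cost(steps, listener_xy, db_limit=55.0, noise_penalty=10.0):
        """Unit step cost, plus a penalty whenever an action would exceed
        the environment's noise constraint at the listener's location."""
        cost = 0.0
        for image, robot_xy, action in steps:
            cost += 1.0
            if noise_at_listener_db(image, robot_xy, listener_xy, action) > db_limit:
                cost += noise_penalty
        return cost

    # Usage: cost a two-step path that passes a listener at (1.0, 0.0).
    steps = [(None, (3.0, 0.0), "roll_fast"), (None, (1.5, 0.0), "roll_slow")]
    print(path_cost(steps, listener_xy=(1.0, 0.0)))

A planner that minimizes such a cost trades path length against noise, e.g. preferring slower, quieter actions near a listener, which is the behavior the paper demonstrates on the Stretch and Go2 robots.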

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-jain25a,
  title = {ANAVI: Audio Noise Awareness using Visual of Indoor environments for NAVIgation},
  author = {Jain, Vidhi and Veerapaneni, Rishi and Bisk, Yonatan},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages = {3924--3942},
  year = {2025},
  editor = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume = {270},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Nov},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/jain25a/jain25a.pdf},
  url = {https://proceedings.mlr.press/v270/jain25a.html},
  abstract = {We propose Audio Noise Awareness using Visuals of Indoors for NAVIgation for quieter robot path planning. While humans are naturally aware of the noise they make and its impact on those around them, robots currently lack this awareness. A key challenge in achieving audio awareness for robots is estimating how loud the robot's actions will be at a listener's location. Since sound depends upon the geometry and material composition of rooms, we train the robot to passively perceive loudness using visual observations of indoor environments. To this end, we generate data on how loud an 'impulse' sounds at different listener locations in simulated homes, and train our Acoustic Noise Predictor (ANP). Next, we collect acoustic profiles corresponding to different actions for navigation. Unifying ANP with action acoustics, we demonstrate experiments with wheeled (Hello Robot Stretch) and legged (Unitree Go2) robots so that these robots adhere to the noise constraints of the environment. All simulated and real-world data, code, and model checkpoints are released at https://anavi-corl24.github.io/.}
}
Endnote
%0 Conference Paper
%T ANAVI: Audio Noise Awareness using Visual of Indoor environments for NAVIgation
%A Vidhi Jain
%A Rishi Veerapaneni
%A Yonatan Bisk
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-jain25a
%I PMLR
%P 3924--3942
%U https://proceedings.mlr.press/v270/jain25a.html
%V 270
%X We propose Audio Noise Awareness using Visuals of Indoors for NAVIgation for quieter robot path planning. While humans are naturally aware of the noise they make and its impact on those around them, robots currently lack this awareness. A key challenge in achieving audio awareness for robots is estimating how loud the robot's actions will be at a listener's location. Since sound depends upon the geometry and material composition of rooms, we train the robot to passively perceive loudness using visual observations of indoor environments. To this end, we generate data on how loud an 'impulse' sounds at different listener locations in simulated homes, and train our Acoustic Noise Predictor (ANP). Next, we collect acoustic profiles corresponding to different actions for navigation. Unifying ANP with action acoustics, we demonstrate experiments with wheeled (Hello Robot Stretch) and legged (Unitree Go2) robots so that these robots adhere to the noise constraints of the environment. All simulated and real-world data, code, and model checkpoints are released at https://anavi-corl24.github.io/.
APA
Jain, V., Veerapaneni, R. & Bisk, Y. (2025). ANAVI: Audio Noise Awareness using Visual of Indoor environments for NAVIgation. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3924-3942. Available from https://proceedings.mlr.press/v270/jain25a.html.
