SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning

Dongseok Shim, Seungjae Lee, H. Jin Kim
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31489-31503, 2023.

Abstract

As previous representations for reinforcement learning cannot effectively incorporate a human-intuitive understanding of the 3D environment, they usually suffer from sub-optimal performance. In this paper, we present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes semantic-aware neural radiance fields (NeRF) with a convolutional encoder to learn a 3D-aware neural implicit representation from multi-view images. We introduce 3D semantic and distilled feature fields in parallel to the RGB radiance fields in NeRF to learn semantic and object-centric representations for reinforcement learning. SNeRL outperforms not only previous pixel-based representations but also recent 3D-aware representations in both model-free and model-based reinforcement learning.
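The abstract's central idea is a NeRF that predicts semantic logits and distilled features in parallel with the usual RGB radiance and density outputs. The toy NumPy sketch below illustrates only that parallel-head structure; all layer sizes, names, and the plain-MLP design are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

# Toy sketch of a field network with parallel output heads, mirroring the
# idea of semantic and distilled-feature fields alongside RGB radiance.
# Layer sizes and head names are illustrative assumptions only.

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Return (weight, bias) for a dense layer with small random weights."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def relu(x):
    return np.maximum(x, 0.0)

class ToySemanticField:
    def __init__(self, pos_dim=3, hidden=64, n_classes=5, feat_dim=16):
        # Shared trunk over 3D point coordinates.
        self.trunk1 = linear(pos_dim, hidden)
        self.trunk2 = linear(hidden, hidden)
        # Parallel heads: RGB radiance, volume density, semantic logits,
        # and a distilled feature vector per 3D point.
        self.rgb_head = linear(hidden, 3)
        self.sigma_head = linear(hidden, 1)
        self.sem_head = linear(hidden, n_classes)
        self.feat_head = linear(hidden, feat_dim)

    def __call__(self, xyz):
        h = relu(xyz @ self.trunk1[0] + self.trunk1[1])
        h = relu(h @ self.trunk2[0] + self.trunk2[1])
        rgb = 1.0 / (1.0 + np.exp(-(h @ self.rgb_head[0] + self.rgb_head[1])))
        sigma = relu(h @ self.sigma_head[0] + self.sigma_head[1])  # density >= 0
        sem = h @ self.sem_head[0] + self.sem_head[1]              # class logits
        feat = h @ self.feat_head[0] + self.feat_head[1]           # distilled features
        return rgb, sigma, sem, feat

field = ToySemanticField()
rgb, sigma, sem, feat = field(rng.standard_normal((8, 3)))
print(rgb.shape, sigma.shape, sem.shape, feat.shape)
```

In the actual method these per-point outputs would be volume-rendered along camera rays and supervised jointly; here the sketch only shows how one shared trunk can feed several task-specific heads.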

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shim23a,
  title     = {{SN}e{RL}: Semantic-aware Neural Radiance Fields for Reinforcement Learning},
  author    = {Shim, Dongseok and Lee, Seungjae and Kim, H. Jin},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31489--31503},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shim23a/shim23a.pdf},
  url       = {https://proceedings.mlr.press/v202/shim23a.html},
  abstract  = {As previous representations for reinforcement learning cannot effectively incorporate a human-intuitive understanding of the 3D environment, they usually suffer from sub-optimal performances. In this paper, we present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes semantic-aware neural radiance fields (NeRF) with a convolutional encoder to learn 3D-aware neural implicit representation from multi-view images. We introduce 3D semantic and distilled feature fields in parallel to the RGB radiance fields in NeRF to learn semantic and object-centric representation for reinforcement learning. SNeRL outperforms not only previous pixel-based representations but also recent 3D-aware representations both in model-free and model-based reinforcement learning.}
}
Endnote
%0 Conference Paper
%T SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning
%A Dongseok Shim
%A Seungjae Lee
%A H. Jin Kim
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-shim23a
%I PMLR
%P 31489--31503
%U https://proceedings.mlr.press/v202/shim23a.html
%V 202
%X As previous representations for reinforcement learning cannot effectively incorporate a human-intuitive understanding of the 3D environment, they usually suffer from sub-optimal performances. In this paper, we present Semantic-aware Neural Radiance Fields for Reinforcement Learning (SNeRL), which jointly optimizes semantic-aware neural radiance fields (NeRF) with a convolutional encoder to learn 3D-aware neural implicit representation from multi-view images. We introduce 3D semantic and distilled feature fields in parallel to the RGB radiance fields in NeRF to learn semantic and object-centric representation for reinforcement learning. SNeRL outperforms not only previous pixel-based representations but also recent 3D-aware representations both in model-free and model-based reinforcement learning.
APA
Shim, D., Lee, S. & Kim, H. J. (2023). SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31489-31503. Available from https://proceedings.mlr.press/v202/shim23a.html.