MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses

Yang Fu, Ishan Misra, Xiaolong Wang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10392-10404, 2023.

Abstract

We propose MonoNeRF, a generalizable neural radiance field that can be trained on large-scale monocular videos captured by a moving camera in static scenes, without any ground-truth depth or camera-pose annotations. MonoNeRF follows an autoencoder-based architecture: the encoder estimates the monocular depth and the camera pose, while the decoder constructs a Multiplane NeRF representation from the depth encoder's features and renders the input frames with the estimated camera. Learning is supervised by the reconstruction error. Once the model is trained, it can be applied to multiple tasks, including depth estimation, camera pose estimation, and single-image novel view synthesis. More qualitative results are available at: https://oasisyang.github.io/mononerf.
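The abstract's training signal can be sketched in a few lines: composite a multiplane scene representation into an image and penalize the photometric error against the input frame. This is a minimal illustrative sketch, not the authors' implementation; the function names, shapes, and the choice of an L1 loss are assumptions.

```python
import numpy as np

def composite_multiplane(planes, alphas):
    """Back-to-front alpha compositing of D fronto-parallel RGB planes.

    planes: (D, H, W, 3) RGB values, ordered far to near.
    alphas: (D, H, W, 1) opacities in [0, 1].
    Returns the composited (H, W, 3) image.
    """
    out = np.zeros(planes.shape[1:])
    for rgb, a in zip(planes, alphas):  # iterate far -> near
        out = a * rgb + (1.0 - a) * out
    return out

def reconstruction_loss(rendered, target):
    """Mean L1 photometric error: the self-supervised signal that
    replaces ground-truth depth and camera-pose annotations."""
    return np.abs(rendered - target).mean()
```

In the full method, the planes would be predicted by the decoder from depth-encoder features and warped by the estimated camera before compositing; here the compositing and loss are shown in isolation.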

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-fu23b,
  title     = {{M}ono{N}e{RF}: Learning Generalizable {N}e{RF}s from Monocular Videos without Camera Poses},
  author    = {Fu, Yang and Misra, Ishan and Wang, Xiaolong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {10392--10404},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/fu23b/fu23b.pdf},
  url       = {https://proceedings.mlr.press/v202/fu23b.html}
}
Endnote
%0 Conference Paper
%T MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses
%A Yang Fu
%A Ishan Misra
%A Xiaolong Wang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-fu23b
%I PMLR
%P 10392--10404
%U https://proceedings.mlr.press/v202/fu23b.html
%V 202
APA
Fu, Y., Misra, I. & Wang, X. (2023). MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:10392-10404. Available from https://proceedings.mlr.press/v202/fu23b.html.
