TRITON: Neural Neural Textures for Better Sim2Real

Ryan D. Burgert, Jinghuan Shang, Xiang Li, Michael S. Ryoo
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2215-2225, 2023.

Abstract

Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and neural neural textures. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. TRITON is not limited to camera movements — it can handle the movement and deformation of objects as well, making it useful for downstream tasks such as robotic manipulation. We demonstrate the superiority of our approach both qualitatively and quantitatively, using robotic experiments and comparisons to ground truth photographs. We show that TRITON generates more useful images than other algorithms do. Please see our project website: tritonpaper.github.io

Cite this Paper

BibTeX
@InProceedings{pmlr-v205-burgert23a,
  title     = {TRITON: Neural Neural Textures for Better Sim2Real},
  author    = {Burgert, Ryan D. and Shang, Jinghuan and Li, Xiang and Ryoo, Michael S.},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2215--2225},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/burgert23a/burgert23a.pdf},
  url       = {https://proceedings.mlr.press/v205/burgert23a.html},
  abstract  = {Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and neural neural textures. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. TRITON is not limited to camera movements — it can handle the movement and deformation of objects as well, making it useful for downstream tasks such as robotic manipulation. We demonstrate the superiority of our approach both qualitatively and quantitatively, using robotic experiments and comparisons to ground truth photographs. We show that TRITON generates more useful images than other algorithms do. Please see our project website: tritonpaper.github.io}
}
Endnote
%0 Conference Paper
%T TRITON: Neural Neural Textures for Better Sim2Real
%A Ryan D. Burgert
%A Jinghuan Shang
%A Xiang Li
%A Michael S. Ryoo
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-burgert23a
%I PMLR
%P 2215--2225
%U https://proceedings.mlr.press/v205/burgert23a.html
%V 205
%X Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and neural neural textures. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. TRITON is not limited to camera movements — it can handle the movement and deformation of objects as well, making it useful for downstream tasks such as robotic manipulation. We demonstrate the superiority of our approach both qualitatively and quantitatively, using robotic experiments and comparisons to ground truth photographs. We show that TRITON generates more useful images than other algorithms do. Please see our project website: tritonpaper.github.io
APA
Burgert, R.D., Shang, J., Li, X. & Ryoo, M.S. (2023). TRITON: Neural Neural Textures for Better Sim2Real. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2215-2225. Available from https://proceedings.mlr.press/v205/burgert23a.html.
