CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation

Jingkang Wang, Sivabalan Manivasagam, Yun Chen, Ze Yang, Ioan Andrei Bârsan, Anqi Joyce Yang, Wei-Chiu Ma, Raquel Urtasun
Proceedings of The 6th Conference on Robot Learning, PMLR 205:630-642, 2023.

Abstract

Realistic simulation is key to enabling safe and scalable development of self-driving vehicles. A core component is simulating the sensors so that the entire autonomy system can be tested in simulation. Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry, and rendering them in real-time. The self-driving industry has employed artists to build these assets. However, this is expensive, slow, and may not reflect reality. Instead, reconstructing assets automatically from sensor data collected in the wild would provide a better path to generating a diverse and large set that provides good real-world coverage. However, current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise. To tackle these issues, we present CADSim which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry, including articulated wheels, with high-quality appearance. Our experiments show our approach recovers more accurate shape from sparse data compared to existing approaches. Importantly, it also trains and renders efficiently. We demonstrate our reconstructed vehicles in a wide range of applications, including accurate testing of autonomy perception systems.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-wang23b,
  title     = {CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation},
  author    = {Wang, Jingkang and Manivasagam, Sivabalan and Chen, Yun and Yang, Ze and B\^arsan, Ioan Andrei and Yang, Anqi Joyce and Ma, Wei-Chiu and Urtasun, Raquel},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {630--642},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/wang23b/wang23b.pdf},
  url       = {https://proceedings.mlr.press/v205/wang23b.html},
  abstract  = {Realistic simulation is key to enabling safe and scalable development of self-driving vehicles. A core component is simulating the sensors so that the entire autonomy system can be tested in simulation. Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry, and rendering them in real-time. The self-driving industry has employed artists to build these assets. However, this is expensive, slow, and may not reflect reality. Instead, reconstructing assets automatically from sensor data collected in the wild would provide a better path to generating a diverse and large set that provides good real-world coverage. However, current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise. To tackle these issues, we present CADSim which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry, including articulated wheels, with high-quality appearance. Our experiments show our approach recovers more accurate shape from sparse data compared to existing approaches. Importantly, it also trains and renders efficiently. We demonstrate our reconstructed vehicles in a wide range of applications, including accurate testing of autonomy perception systems.}
}
Endnote
%0 Conference Paper
%T CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation
%A Jingkang Wang
%A Sivabalan Manivasagam
%A Yun Chen
%A Ze Yang
%A Ioan Andrei Bârsan
%A Anqi Joyce Yang
%A Wei-Chiu Ma
%A Raquel Urtasun
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-wang23b
%I PMLR
%P 630--642
%U https://proceedings.mlr.press/v205/wang23b.html
%V 205
%X Realistic simulation is key to enabling safe and scalable development of self-driving vehicles. A core component is simulating the sensors so that the entire autonomy system can be tested in simulation. Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry, and rendering them in real-time. The self-driving industry has employed artists to build these assets. However, this is expensive, slow, and may not reflect reality. Instead, reconstructing assets automatically from sensor data collected in the wild would provide a better path to generating a diverse and large set that provides good real-world coverage. However, current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise. To tackle these issues, we present CADSim which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry, including articulated wheels, with high-quality appearance. Our experiments show our approach recovers more accurate shape from sparse data compared to existing approaches. Importantly, it also trains and renders efficiently. We demonstrate our reconstructed vehicles in a wide range of applications, including accurate testing of autonomy perception systems.
APA
Wang, J., Manivasagam, S., Chen, Y., Yang, Z., Bârsan, I. A., Yang, A. J., Ma, W.-C., & Urtasun, R. (2023). CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:630-642. Available from https://proceedings.mlr.press/v205/wang23b.html.
