LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting

Chuanyu Pan, Aolin Xu
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3587-3599, 2025.

Abstract

Capturing the dense geometry of the surrounding scene from unlabeled LiDAR data in a timely manner is valuable but under-explored for mobile robotic applications. Its value lies in the huge amount of such unlabeled data, which enables self-supervised learning for various downstream tasks. Current dynamic 3D scene reconstruction approaches, however, rely heavily on data annotations to handle moving objects in the scene. In response, we present LiDARGrid, a 3D opacity grid representation instantly derived from LiDAR points, which captures the dense 3D scene and facilitates scene forecasting. Our method features a novel self-supervised neural volume densification procedure based on an autoencoder and differentiable volume rendering. Leveraging this representation, scene forecasting can be performed in a self-supervised manner. Our method is trained on the NuScenes autonomous driving dataset and evaluated by predicting future point clouds via scene forecasting. It notably outperforms state-of-the-art point cloud forecasting methods on all performance metrics. Beyond scene forecasting, experiments show that our representation also supports additional tasks such as moving region detection and depth completion.
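To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: deriving a coarse opacity grid from LiDAR points, and supervising it through differentiable volume rendering of expected depths along LiDAR rays. Everything here (grid bounds, resolution, nearest-neighbor lookup, the L1 depth loss, and the names points_to_opacity / render_depth) is an illustrative assumption about the general technique, not the paper's actual architecture, which additionally learns a densified grid with an autoencoder.

# Sketch: self-supervision of a 3D opacity grid via differentiable volume
# rendering. Grid layout, sampling scheme, and loss are assumptions.
import torch

def points_to_opacity(points, grid_shape=(128, 128, 16),
                      grid_min=-50.0, grid_max=50.0):
    """Coarse binary opacity grid from LiDAR points (hypothetical helper)."""
    shape = torch.tensor(grid_shape)
    idx = ((points - grid_min) / (grid_max - grid_min) * shape).long()  # (N, 3)
    valid = ((idx >= 0) & (idx < shape)).all(dim=1)
    grid = torch.zeros(grid_shape)
    i = idx[valid]
    grid[i[:, 0], i[:, 1], i[:, 2]] = 1.0  # mark occupied voxels
    return grid

def render_depth(opacity_grid, origins, dirs, t_near=0.5, t_far=50.0,
                 n_samples=64, grid_min=-50.0, grid_max=50.0):
    """Render expected per-ray depth from a (D, H, W) voxel opacity grid.

    origins, dirs: (N, 3) ray origins and unit directions (LiDAR rays).
    """
    D, H, W = opacity_grid.shape
    # Sample points along each ray: (N, n_samples, 3)
    t = torch.linspace(t_near, t_far, n_samples)                     # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]

    # World coordinates -> voxel indices (nearest neighbor for brevity;
    # trilinear interpolation would give smoother gradients).
    idx = ((pts - grid_min) / (grid_max - grid_min)
           * torch.tensor([D, H, W])).long().clamp(min=0)
    idx[..., 0].clamp_(max=D - 1)
    idx[..., 1].clamp_(max=H - 1)
    idx[..., 2].clamp_(max=W - 1)
    alpha = opacity_grid[idx[..., 0], idx[..., 1], idx[..., 2]]      # (N, S)

    # Standard volume-rendering weights: w_i = alpha_i * prod_{j<i}(1 - alpha_j)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                          # (N, S)
    return (weights * t[None, :]).sum(dim=1)                         # (N,)

# Self-supervised signal: rendered depth should match the observed LiDAR return.
# depth_pred = render_depth(opacity_grid, origins, dirs)
# loss = torch.nn.functional.l1_loss(depth_pred, lidar_depths)

In the paper's pipeline the rendered depths would supervise the autoencoder's densified opacity grid rather than the raw occupancy shown here; the sketch only illustrates why the rendering step makes the grid trainable without annotations.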

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-pan25a,
  title     = {LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting},
  author    = {Pan, Chuanyu and Xu, Aolin},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3587--3599},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/pan25a/pan25a.pdf},
  url       = {https://proceedings.mlr.press/v270/pan25a.html}
}
Endnote
%0 Conference Paper
%T LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting
%A Chuanyu Pan
%A Aolin Xu
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-pan25a
%I PMLR
%P 3587--3599
%U https://proceedings.mlr.press/v270/pan25a.html
%V 270
APA
Pan, C. & Xu, A. (2025). LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3587-3599. Available from https://proceedings.mlr.press/v270/pan25a.html.
