Anytime Depth Estimation with Limited Sensing and Computation Capabilities on Mobile Devices
Proceedings of the 5th Conference on Robot Learning, PMLR 164:609-618, 2022.
Abstract
Depth estimation is a safety-critical and energy-sensitive method for environment sensing. In real applications, however, depth estimation may be halted at any time due to random interruptions or the limited battery capacity available when using power-hungry sensors such as 3D LiDAR. To address this problem, we propose a depth estimation method that is robust to random halts and relies on an energy-saving 2D LiDAR and a monocular camera. To this end, we formulate depth estimation as an anytime problem and propose a new metric to evaluate its robustness under random interruptions. Our final model has only 2M parameters, with a marginal accuracy loss compared to state-of-the-art baselines. Indeed, our experiments on the NYU Depth v2 dataset show that our model can process 224$\times$224 resolution images and 2D point clouds within any computation budget larger than 6.37ms (157 FPS) and 0.2J on an NVIDIA Jetson TX2 system. Evaluations on the KITTI dataset under supervised and self-supervised training show similar results.
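The core of an anytime formulation is that inference can be interrupted at any point and must still return a usable estimate, with quality improving the longer it runs. A minimal sketch of this contract is shown below; the stage pipeline and function names are hypothetical illustrations, not the authors' implementation.

```python
import time

def anytime_predict(stages, x, budget_s):
    """Run successive refinement stages until the time budget is
    exhausted, returning the most refined estimate produced so far.
    Each stage must yield a complete (if coarse) prediction, so an
    interruption after any stage still leaves a valid output."""
    start = time.monotonic()
    prediction = None
    for stage in stages:
        x = stage(x)
        prediction = x  # every stage emits a usable estimate
        if time.monotonic() - start >= budget_s:
            break       # budget spent: return the latest complete estimate
    return prediction

# Toy stages: each "refinement" just increments a counter here,
# standing in for progressively finer depth maps.
stages = [lambda v: v + 1 for _ in range(4)]
print(anytime_predict(stages, 0, budget_s=10.0))  # ample budget runs all 4 stages
```

With an ample budget all stages run; with a near-zero budget only the first stage completes, mirroring how an interrupted anytime estimator degrades gracefully instead of failing outright.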