LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation

Florent Bartoccioni, Eloi Zablocki, Andrei Bursuc, Patrick Perez, Matthieu Cord, Karteek Alahari
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1663-1672, 2023.

Abstract

Recent works in autonomous driving have widely adopted the bird’s-eye-view (BEV) semantic map as an intermediate representation of the world. Online prediction of these BEV maps involves non-trivial operations such as multi-camera data extraction as well as fusion and projection into a common top-view grid. This is usually done with error-prone geometric operations (e.g., homography or back-projection from monocular depth estimation) or expensive direct dense mapping between image pixels and pixels in BEV (e.g., with MLP or attention). In this work, we present ‘LaRa’, an efficient encoder-decoder, transformer-based model for vehicle semantic segmentation from multiple cameras. Our approach uses a system of cross-attention to aggregate information over multiple sensors into a compact, yet rich, collection of latent representations. These latent representations, after being processed by a series of self-attention blocks, are then reprojected with a second cross-attention in the BEV space. We demonstrate that our model outperforms the best previous works using transformers on nuScenes. The code and trained models are available at https://github.com/valeoai/LaRa.
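The abstract describes a two-stage attention pipeline: a first cross-attention compresses multi-camera features into a fixed set of latents, self-attention blocks refine those latents, and a second cross-attention reads them out onto a BEV grid. Below is a minimal PyTorch sketch of that structure. It is illustrative only: the class name LaRaSketch, the hyperparameters, and the use of stock nn.MultiheadAttention / nn.TransformerEncoderLayer modules are assumptions, not the authors' implementation (see the linked repository for the real model, which also injects camera-ray geometric embeddings into the image features).

```python
import torch
import torch.nn as nn


class LaRaSketch(nn.Module):
    """Hypothetical sketch of the latents-and-rays pipeline outlined in the abstract."""

    def __init__(self, feat_dim=256, latent_dim=256, num_latents=512,
                 num_self_blocks=4, num_heads=8, bev_size=200):
        super().__init__()
        # Learned latent array that aggregates information from all cameras.
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        # 1) Input-to-latent cross-attention: latents are queries, camera features are keys/values.
        self.encode_attn = nn.MultiheadAttention(latent_dim, num_heads,
                                                 kdim=feat_dim, vdim=feat_dim,
                                                 batch_first=True)
        # 2) Self-attention blocks refining the latent representation.
        self.self_blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(latent_dim, num_heads, batch_first=True)
            for _ in range(num_self_blocks)
        ])
        # 3) Latent-to-BEV cross-attention: one learned query per BEV cell reads from the latents.
        self.bev_queries = nn.Parameter(torch.randn(bev_size * bev_size, latent_dim))
        self.decode_attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.seg_head = nn.Linear(latent_dim, 1)  # vehicle / background logit per BEV cell
        self.bev_size = bev_size

    def forward(self, cam_feats):
        # cam_feats: (B, N_cams * H * W, feat_dim), image features flattened over cameras and
        # pixels; in the paper these would additionally carry camera-ray embeddings.
        B = cam_feats.shape[0]
        lat = self.latents.unsqueeze(0).expand(B, -1, -1)
        lat, _ = self.encode_attn(lat, cam_feats, cam_feats)   # fuse all cameras into latents
        for blk in self.self_blocks:
            lat = blk(lat)                                     # refine latents
        q = self.bev_queries.unsqueeze(0).expand(B, -1, -1)
        bev, _ = self.decode_attn(q, lat, lat)                 # re-project latents onto the BEV grid
        logits = self.seg_head(bev)                            # (B, bev_size^2, 1)
        return logits.view(B, 1, self.bev_size, self.bev_size)


if __name__ == "__main__":
    model = LaRaSketch()
    feats = torch.randn(1, 6 * 28 * 60, 256)  # e.g., 6 cameras with 28x60 feature maps
    print(model(feats).shape)                 # torch.Size([1, 1, 200, 200])
```

Because the number of latents is fixed and much smaller than the number of image pixels or BEV cells, both cross-attentions stay cheap compared to dense pixel-to-BEV attention, which is the efficiency argument made in the abstract.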

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-bartoccioni23a,
  title     = {LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation},
  author    = {Bartoccioni, Florent and Zablocki, Eloi and Bursuc, Andrei and Perez, Patrick and Cord, Matthieu and Alahari, Karteek},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1663--1672},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/bartoccioni23a/bartoccioni23a.pdf},
  url       = {https://proceedings.mlr.press/v205/bartoccioni23a.html},
  abstract  = {Recent works in autonomous driving have widely adopted the bird’s-eye-view (BEV) semantic map as an intermediate representation of the world. Online prediction of these BEV maps involves non-trivial operations such as multi-camera data extraction as well as fusion and projection into a common top-view grid. This is usually done with error-prone geometric operations (e.g., homography or back-projection from monocular depth estimation) or expensive direct dense mapping between image pixels and pixels in BEV (e.g., with MLP or attention). In this work, we present ‘LaRa’, an efficient encoder-decoder, transformer-based model for vehicle semantic segmentation from multiple cameras. Our approach uses a system of cross-attention to aggregate information over multiple sensors into a compact, yet rich, collection of latent representations. These latent representations, after being processed by a series of self-attention blocks, are then reprojected with a second cross-attention in the BEV space. We demonstrate that our model outperforms the best previous works using transformers on nuScenes. The code and trained models are available at https://github.com/valeoai/LaRa.}
}
Endnote
%0 Conference Paper
%T LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation
%A Florent Bartoccioni
%A Eloi Zablocki
%A Andrei Bursuc
%A Patrick Perez
%A Matthieu Cord
%A Karteek Alahari
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-bartoccioni23a
%I PMLR
%P 1663--1672
%U https://proceedings.mlr.press/v205/bartoccioni23a.html
%V 205
%X Recent works in autonomous driving have widely adopted the bird’s-eye-view (BEV) semantic map as an intermediate representation of the world. Online prediction of these BEV maps involves non-trivial operations such as multi-camera data extraction as well as fusion and projection into a common top-view grid. This is usually done with error-prone geometric operations (e.g., homography or back-projection from monocular depth estimation) or expensive direct dense mapping between image pixels and pixels in BEV (e.g., with MLP or attention). In this work, we present ‘LaRa’, an efficient encoder-decoder, transformer-based model for vehicle semantic segmentation from multiple cameras. Our approach uses a system of cross-attention to aggregate information over multiple sensors into a compact, yet rich, collection of latent representations. These latent representations, after being processed by a series of self-attention blocks, are then reprojected with a second cross-attention in the BEV space. We demonstrate that our model outperforms the best previous works using transformers on nuScenes. The code and trained models are available at https://github.com/valeoai/LaRa.
APA
Bartoccioni, F., Zablocki, E., Bursuc, A., Perez, P., Cord, M., & Alahari, K. (2023). LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1663-1672. Available from https://proceedings.mlr.press/v205/bartoccioni23a.html.