Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception

Yiming Li, Juexiao Zhang, Dekun Ma, Yue Wang, Chen Feng
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2062-2072, 2023.

Abstract

Collaborative perception learns how to share information among multiple robots so that they perceive the environment better than any robot could alone. Past research on this topic has been task-specific, such as detection or segmentation, which leads to different information being shared for different tasks and hinders the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is achieved through a novel task termed multi-robot scene completion, in which each robot learns to effectively share information for reconstructing the complete scene observed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes the communication cost over time via spatial sub-sampling and temporal mixing. Extensive experiments validate our method’s effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.
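To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a masked spatiotemporal autoencoder that sub-samples spatial tokens and mixes timestamps before decoding. This is an illustrative assumption, not the authors' STAR implementation: the class name STARSketch, all tensor shapes, and every hyperparameter are invented for exposition.

import torch
import torch.nn as nn

class STARSketch(nn.Module):
    # Illustrative sketch only, NOT the authors' STAR implementation.
    # Each robot's bird's-eye-view (BEV) feature map is flattened into
    # spatial tokens; a random subset per sequence is kept (spatial
    # sub-sampling), kept tokens from several timestamps are encoded in
    # one pass (temporal mixing), and mask tokens are decoded back to
    # the full scene. All names, shapes, and defaults are assumptions.
    def __init__(self, dim=128, tokens_per_frame=64, frames=3,
                 keep_ratio=0.25, depth=2, heads=4):
        super().__init__()
        total = frames * tokens_per_frame
        self.n_keep = max(1, int(total * keep_ratio))      # tokens actually communicated
        self.pos = nn.Parameter(torch.zeros(total, dim))   # spatio-temporal positions
        self.mask_token = nn.Parameter(torch.zeros(dim))   # placeholder for dropped tokens
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # layers are deep-copied
        self.decoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, frames, tokens_per_frame, dim) BEV tokens from one robot
        B, T, N, D = x.shape
        x = x.flatten(1, 2) + self.pos                      # (B, T*N, D)
        # Spatial sub-sampling: a random subset of tokens is all that
        # would be sent; temporal mixing: kept tokens from all T frames
        # share a single encoder sequence.
        idx = torch.rand(B, T * N, device=x.device).argsort(-1)[:, :self.n_keep]
        kept = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(kept)
        # Decoding: scatter encoded tokens back into place, fill the
        # rest with a learned mask token, and reconstruct every token.
        canvas = self.mask_token.expand(B, T * N, D).clone()
        canvas = canvas.scatter(1, idx.unsqueeze(-1).expand(-1, -1, D), latent)
        return self.head(self.decoder(canvas + self.pos)).view(B, T, N, D)

# Toy usage: in the paper's setting, the reconstruction target is the
# complete scene viewed by all robots, so a reconstruction loss (e.g.
# MSE) supervises the collaboration module without task labels.
recon = STARSketch()(torch.randn(2, 3, 64, 128))  # -> (2, 3, 64, 128)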

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-li23e,
  title     = {Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception},
  author    = {Li, Yiming and Zhang, Juexiao and Ma, Dekun and Wang, Yue and Feng, Chen},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2062--2072},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/li23e/li23e.pdf},
  url       = {https://proceedings.mlr.press/v205/li23e.html},
  abstract  = {Collaborative perception learns how to share information among multiple robots to perceive the environment better than individually done. Past research on this has been task-specific, such as detection or segmentation. Yet this leads to different information sharing for different tasks, hindering the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is done by a novel task termed multi-robot scene completion, where each robot learns to effectively share information for reconstructing a complete scene viewed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes over time the communication cost by spatial sub-sampling and temporal mixing. Extensive experiments validate our method’s effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.}
}
Endnote
%0 Conference Paper
%T Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception
%A Yiming Li
%A Juexiao Zhang
%A Dekun Ma
%A Yue Wang
%A Chen Feng
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-li23e
%I PMLR
%P 2062--2072
%U https://proceedings.mlr.press/v205/li23e.html
%V 205
%X Collaborative perception learns how to share information among multiple robots to perceive the environment better than individually done. Past research on this has been task-specific, such as detection or segmentation. Yet this leads to different information sharing for different tasks, hindering the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is done by a novel task termed multi-robot scene completion, where each robot learns to effectively share information for reconstructing a complete scene viewed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes over time the communication cost by spatial sub-sampling and temporal mixing. Extensive experiments validate our method’s effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.
APA
Li, Y., Zhang, J., Ma, D., Wang, Y. & Feng, C. (2023). Multi-Robot Scene Completion: Towards Task-Agnostic Collaborative Perception. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2062-2072. Available from https://proceedings.mlr.press/v205/li23e.html.