Just Label What You Need: Fine-Grained Active Selection for P&P through Partially Labeled Scenes

Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun
Proceedings of the 5th Conference on Robot Learning, PMLR 164:816-826, 2022.

Abstract

Self-driving vehicles must perceive and predict the future positions of nearby actors to avoid collisions and drive safely. A deep learning module is often responsible for this task, requiring large-scale, high-quality training datasets. Due to high labeling costs, active learning approaches are an appealing solution to maximizing model performance for a given labeling budget. However, despite its appeal, there has been little scientific analysis of active learning approaches for the perception and prediction (P&P) problem. In this work, we study active learning techniques for P&P and find that the traditional active learning formulation is ill-suited. We thus introduce generalizations that ensure that our approach is both cost-aware and allows for fine-grained selection of examples through partially labeled scenes. Extensive experiments on a real-world dataset suggest significant improvements across perception, prediction, and downstream planning tasks.

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-segal22a,
  title     = {Just Label What You Need: Fine-Grained Active Selection for P\&P through Partially Labeled Scenes},
  author    = {Segal, Sean and Kumar, Nishanth and Casas, Sergio and Zeng, Wenyuan and Ren, Mengye and Wang, Jingkang and Urtasun, Raquel},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {816--826},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/segal22a/segal22a.pdf},
  url       = {https://proceedings.mlr.press/v164/segal22a.html},
  abstract  = {Self-driving vehicles must perceive and predict the future positions of nearby actors to avoid collisions and drive safely. A deep learning module is often responsible for this task, requiring large-scale, high-quality training datasets. Due to high labeling costs, active learning approaches are an appealing solution to maximizing model performance for a given labeling budget. However, despite its appeal, there has been little scientific analysis of active learning approaches for the perception and prediction (P\&P) problem. In this work, we study active learning techniques for P\&P and find that the traditional active learning formulation is ill-suited. We thus introduce generalizations that ensure that our approach is both cost-aware and allows for fine-grained selection of examples through partially labeled scenes. Extensive experiments on a real-world dataset suggest significant improvements across perception, prediction, and downstream planning tasks.}
}
Endnote
%0 Conference Paper
%T Just Label What You Need: Fine-Grained Active Selection for P&P through Partially Labeled Scenes
%A Sean Segal
%A Nishanth Kumar
%A Sergio Casas
%A Wenyuan Zeng
%A Mengye Ren
%A Jingkang Wang
%A Raquel Urtasun
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-segal22a
%I PMLR
%P 816--826
%U https://proceedings.mlr.press/v164/segal22a.html
%V 164
%X Self-driving vehicles must perceive and predict the future positions of nearby actors to avoid collisions and drive safely. A deep learning module is often responsible for this task, requiring large-scale, high-quality training datasets. Due to high labeling costs, active learning approaches are an appealing solution to maximizing model performance for a given labeling budget. However, despite its appeal, there has been little scientific analysis of active learning approaches for the perception and prediction (P&P) problem. In this work, we study active learning techniques for P&P and find that the traditional active learning formulation is ill-suited. We thus introduce generalizations that ensure that our approach is both cost-aware and allows for fine-grained selection of examples through partially labeled scenes. Extensive experiments on a real-world dataset suggest significant improvements across perception, prediction, and downstream planning tasks.
APA
Segal, S., Kumar, N., Casas, S., Zeng, W., Ren, M., Wang, J. & Urtasun, R. (2022). Just Label What You Need: Fine-Grained Active Selection for P&P through Partially Labeled Scenes. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:816-826. Available from https://proceedings.mlr.press/v164/segal22a.html.