Fleet Active Learning: A Submodular Maximization Approach

Oguzhan Akcin, Orhan Unuvar, Onat Ure, Sandeep P. Chinchali
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1378-1399, 2023.

Abstract

In multi-robot systems, robots often gather data to improve the performance of their deep neural networks (DNNs) for perception and planning. Ideally, these robots should select the most informative samples from their local data distributions by employing active learning approaches. However, when the data collection is distributed among multiple robots, redundancy becomes an issue as different robots may select similar data points. To overcome this challenge, we propose a fleet active learning (FAL) framework in which robots collectively select informative data samples to enhance their DNN models. Our framework leverages submodular maximization techniques to prioritize the selection of samples with high information gain. Through an iterative algorithm, the robots coordinate their efforts to collectively select the most valuable samples while minimizing communication between robots. We provide a theoretical analysis of the performance of our proposed framework and show that it is able to approximate the NP-hard optimal solution. We demonstrate the effectiveness of our framework through experiments on real-world perception and classification datasets, which include autonomous driving datasets such as Berkeley DeepDrive. Our results show improvements of up to 25.0% in classification accuracy, 9.2% in mean average precision, and 48.5% in the submodular objective value compared to a completely distributed baseline.
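
The framework builds on submodular maximization, for which the simple greedy heuristic is well known to achieve a (1 - 1/e) approximation of the NP-hard optimum for monotone submodular objectives under a cardinality constraint (Nemhauser et al., 1978). The sketch below illustrates the general flavor of such a scheme, not the paper's exact algorithm: each robot greedily selects a batch that maximizes the marginal gain of a facility-location objective over feature embeddings, conditioned on the samples already contributed by earlier robots in the round, so redundant selections across the fleet yield little gain. The choice of objective, the similarity measure, and all names are illustrative assumptions.

# Illustrative sketch only: greedy batch selection under a facility-location
# submodular objective, with each robot conditioning on samples already shared
# by the rest of the fleet. This is not the paper's exact algorithm; the
# objective, similarity measure, and all names here are assumptions.

import numpy as np


def facility_location(selected: np.ndarray, pool: np.ndarray) -> float:
    """f(S) = sum_{x in pool} max_{s in S} max(cos_sim(x, s), 0); monotone submodular."""
    if selected.shape[0] == 0:
        return 0.0
    pool_n = pool / (np.linalg.norm(pool, axis=1, keepdims=True) + 1e-12)
    sel_n = selected / (np.linalg.norm(selected, axis=1, keepdims=True) + 1e-12)
    sims = pool_n @ sel_n.T                      # (|pool|, |S|)
    return float(np.maximum(sims, 0.0).max(axis=1).sum())


def greedy_select(local_data: np.ndarray, shared: np.ndarray, budget: int) -> list:
    """Greedily pick `budget` local indices with the largest marginal gain,
    given embeddings already shared by other robots."""
    chosen = []
    for _ in range(budget):
        current = np.vstack([shared, local_data[chosen]])
        base = facility_location(current, local_data)
        gains = np.full(len(local_data), -np.inf)
        for i in range(len(local_data)):
            if i in chosen:
                continue
            candidate = np.vstack([current, local_data[i:i + 1]])
            gains[i] = facility_location(candidate, local_data) - base
        chosen.append(int(np.argmax(gains)))
    return chosen


# One selection round: robots take turns, each conditioning its greedy choice
# on what the fleet has already selected, which suppresses redundant samples.
rng = np.random.default_rng(0)
robots = [rng.normal(size=(200, 16)) for _ in range(3)]   # per-robot embeddings
shared = np.empty((0, 16))
for data in robots:
    picks = greedy_select(data, shared, budget=5)
    shared = np.vstack([shared, data[picks]])
print("samples selected by the fleet:", shared.shape[0])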

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-akcin23a,
  title     = {Fleet Active Learning: A Submodular Maximization Approach},
  author    = {Akcin, Oguzhan and Unuvar, Orhan and Ure, Onat and Chinchali, Sandeep P.},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1378--1399},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/akcin23a/akcin23a.pdf},
  url       = {https://proceedings.mlr.press/v229/akcin23a.html},
  abstract  = {In multi-robot systems, robots often gather data to improve the performance of their deep neural networks (DNNs) for perception and planning. Ideally, these robots should select the most informative samples from their local data distributions by employing active learning approaches. However, when the data collection is distributed among multiple robots, redundancy becomes an issue as different robots may select similar data points. To overcome this challenge, we propose a fleet active learning (FAL) framework in which robots collectively select informative data samples to enhance their DNN models. Our framework leverages submodular maximization techniques to prioritize the selection of samples with high information gain. Through an iterative algorithm, the robots coordinate their efforts to collectively select the most valuable samples while minimizing communication between robots. We provide a theoretical analysis of the performance of our proposed framework and show that it is able to approximate the NP-hard optimal solution. We demonstrate the effectiveness of our framework through experiments on real-world perception and classification datasets, which include autonomous driving datasets such as Berkeley DeepDrive. Our results show an improvement by up to 25.0\% in classification accuracy, 9.2\% in mean average precision and 48.5\% in the submodular objective value compared to a completely distributed baseline.}
}
Endnote
%0 Conference Paper
%T Fleet Active Learning: A Submodular Maximization Approach
%A Oguzhan Akcin
%A Orhan Unuvar
%A Onat Ure
%A Sandeep P. Chinchali
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-akcin23a
%I PMLR
%P 1378--1399
%U https://proceedings.mlr.press/v229/akcin23a.html
%V 229
%X In multi-robot systems, robots often gather data to improve the performance of their deep neural networks (DNNs) for perception and planning. Ideally, these robots should select the most informative samples from their local data distributions by employing active learning approaches. However, when the data collection is distributed among multiple robots, redundancy becomes an issue as different robots may select similar data points. To overcome this challenge, we propose a fleet active learning (FAL) framework in which robots collectively select informative data samples to enhance their DNN models. Our framework leverages submodular maximization techniques to prioritize the selection of samples with high information gain. Through an iterative algorithm, the robots coordinate their efforts to collectively select the most valuable samples while minimizing communication between robots. We provide a theoretical analysis of the performance of our proposed framework and show that it is able to approximate the NP-hard optimal solution. We demonstrate the effectiveness of our framework through experiments on real-world perception and classification datasets, which include autonomous driving datasets such as Berkeley DeepDrive. Our results show an improvement by up to 25.0% in classification accuracy, 9.2% in mean average precision and 48.5% in the submodular objective value compared to a completely distributed baseline.
APA
Akcin, O., Unuvar, O., Ure, O. & Chinchali, S. P. (2023). Fleet Active Learning: A Submodular Maximization Approach. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1378-1399. Available from https://proceedings.mlr.press/v229/akcin23a.html.