SCONE: A Food Scooping Robot Learning Framework with Active Perception

Yen-Ling Tai, Yu Chien Chiu, Yu-Wei Chao, Yi-Ting Chen
Proceedings of The 7th Conference on Robot Learning, PMLR 229:849-865, 2023.

Abstract

Effectively scooping food items poses a substantial challenge for current robotic systems, due to the intricate states and diverse physical properties of food. To address this challenge, we believe in the importance of encoding food items into meaningful representations for effective food scooping. However, the distinctive properties of food items, including deformability, fragility, fluidity, or granularity, pose significant challenges for existing representations. In this paper, we investigate the potential of active perception for learning meaningful food representations in an implicit manner. To this end, we present SCONE, a food-scooping robot learning framework that leverages representations gained from active perception to facilitate food scooping policy learning. SCONE comprises two crucial encoding components: the interactive encoder and the state retrieval module. Through the encoding process, SCONE is capable of capturing properties of food items and vital state characteristics. In our real-world scooping experiments, SCONE excels with a 71% success rate when tasked with 6 previously unseen food items across three different difficulty levels, surpassing state-of-the-art methods. This enhanced performance underscores SCONE's stability, as all food items consistently achieve task success rates exceeding 50%. Additionally, SCONE's impressive capacity to accommodate diverse initial states enables it to precisely evaluate the present condition of the food, resulting in a compelling scooping success rate. For further information, please visit our website: https://sites.google.com/view/corlscone/home.

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-tai23a,
  title     = {SCONE: A Food Scooping Robot Learning Framework with Active Perception},
  author    = {Tai, Yen-Ling and Chiu, Yu Chien and Chao, Yu-Wei and Chen, Yi-Ting},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {849--865},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/tai23a/tai23a.pdf},
  url       = {https://proceedings.mlr.press/v229/tai23a.html},
  abstract  = {Effectively scooping food items poses a substantial challenge for current robotic systems, due to the intricate states and diverse physical properties of food. To address this challenge, we believe in the importance of encoding food items into meaningful representations for effective food scooping. However, the distinctive properties of food items, including deformability, fragility, fluidity, or granularity, pose significant challenges for existing representations. In this paper, we investigate the potential of active perception for learning meaningful food representations in an implicit manner. To this end, we present SCONE, a food-scooping robot learning framework that leverages representations gained from active perception to facilitate food scooping policy learning. SCONE comprises two crucial encoding components: the interactive encoder and the state retrieval module. Through the encoding process, SCONE is capable of capturing properties of food items and vital state characteristics. In our real-world scooping experiments, SCONE excels with a 71\% success rate when tasked with 6 previously unseen food items across three different difficulty levels, surpassing state-of-the-art methods. This enhanced performance underscores SCONE's stability, as all food items consistently achieve task success rates exceeding 50\%. Additionally, SCONE's impressive capacity to accommodate diverse initial states enables it to precisely evaluate the present condition of the food, resulting in a compelling scooping success rate. For further information, please visit our website: https://sites.google.com/view/corlscone/home.}
}
Endnote
%0 Conference Paper
%T SCONE: A Food Scooping Robot Learning Framework with Active Perception
%A Yen-Ling Tai
%A Yu Chien Chiu
%A Yu-Wei Chao
%A Yi-Ting Chen
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-tai23a
%I PMLR
%P 849--865
%U https://proceedings.mlr.press/v229/tai23a.html
%V 229
%X Effectively scooping food items poses a substantial challenge for current robotic systems, due to the intricate states and diverse physical properties of food. To address this challenge, we believe in the importance of encoding food items into meaningful representations for effective food scooping. However, the distinctive properties of food items, including deformability, fragility, fluidity, or granularity, pose significant challenges for existing representations. In this paper, we investigate the potential of active perception for learning meaningful food representations in an implicit manner. To this end, we present SCONE, a food-scooping robot learning framework that leverages representations gained from active perception to facilitate food scooping policy learning. SCONE comprises two crucial encoding components: the interactive encoder and the state retrieval module. Through the encoding process, SCONE is capable of capturing properties of food items and vital state characteristics. In our real-world scooping experiments, SCONE excels with a 71% success rate when tasked with 6 previously unseen food items across three different difficulty levels, surpassing state-of-the-art methods. This enhanced performance underscores SCONE's stability, as all food items consistently achieve task success rates exceeding 50%. Additionally, SCONE's impressive capacity to accommodate diverse initial states enables it to precisely evaluate the present condition of the food, resulting in a compelling scooping success rate. For further information, please visit our website: https://sites.google.com/view/corlscone/home.
APA
Tai, Y.-L., Chiu, Y. C., Chao, Y.-W., & Chen, Y.-T. (2023). SCONE: A Food Scooping Robot Learning Framework with Active Perception. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:849-865. Available from https://proceedings.mlr.press/v229/tai23a.html.