Towards General Single-Utensil Food Acquisition with Human-Informed Actions

Ethan Kroll Gordon, Amal Nanavati, Ramya Challa, Bernie Hao Zhu, Taylor Annette Kessler Faulkner, Siddhartha Srinivasa
Proceedings of The 7th Conference on Robot Learning, PMLR 229:2414-2428, 2023.

Abstract

Food acquisition with common general-purpose utensils is a necessary component of robot applications like in-home assistive feeding. Learning acquisition policies in this space is difficult in part because any model must contend with extensive state and action spaces. Food is extremely diverse and generally difficult to simulate, and acquisition actions like skewers, scoops, wiggles, and twirls can be parameterized in myriad ways. However, food’s visual diversity can belie a degree of physical homogeneity, and many foods allow flexibility in how they are acquired. Our key insight is therefore that a small subset of actions is sufficient to acquire a wide variety of food items. In this work, we present a methodology for identifying such a subset from limited human trajectory data. We first develop an over-parameterized action space of robot acquisition trajectories that captures the variety of human food acquisition techniques. By mapping human trajectories into this space and clustering them, we construct a discrete set of 11 actions. We demonstrate that this set can acquire a variety of food items with a $\geq 80\%$ success rate, a rate that users have said is sufficient for in-home robot-assisted feeding. Furthermore, because this set is so small, we also show that online learning can identify a sufficiently optimal action for a previously unseen food item over the course of a single meal.
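
The abstract outlines a two-stage pipeline: cluster human acquisition trajectories, expressed as vectors in an over-parameterized action space, into a small discrete action set, then run an online learner over that set during a meal. The Python sketch below is a minimal illustration of that idea, not the authors' implementation: k-means stands in for the paper's clustering step, UCB1 stands in for its online learner, and the 20-dimensional trajectory encoding and simulated success signal are placeholder assumptions.

import numpy as np
from sklearn.cluster import KMeans

def build_action_set(demo_vectors, n_actions=11):
    """Cluster human demonstration trajectories (each already mapped to a
    fixed-length vector in the over-parameterized action space) and return
    the cluster centers as the discrete action set."""
    km = KMeans(n_clusters=n_actions, n_init=10, random_state=0)
    km.fit(demo_vectors)
    return km.cluster_centers_

class UCB1:
    """Standard UCB1 bandit over the discrete action set: each bite attempt
    is one pull, with reward 1 on successful acquisition and 0 otherwise."""
    def __init__(self, n_actions):
        self.counts = np.zeros(n_actions)
        self.means = np.zeros(n_actions)
        self.t = 0

    def select(self):
        self.t += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size > 0:
            return int(untried[0])  # try every action once first
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(self.means + bonus))

    def update(self, action, reward):
        self.counts[action] += 1
        self.means[action] += (reward - self.means[action]) / self.counts[action]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demos = rng.normal(size=(100, 20))       # placeholder for real human trajectory vectors
    actions = build_action_set(demos)        # 11 representative acquisition actions
    bandit = UCB1(len(actions))
    for _ in range(30):                      # roughly one meal's worth of bite attempts
        a = bandit.select()
        success = float(rng.random() < 0.5)  # placeholder for the real acquisition outcome
        bandit.update(a, success)

Because the action set has only 11 arms, a learner of this kind can try every action once and still have most of a meal's bites left to exploit the best-performing one, which is the property the abstract's single-meal claim relies on.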

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-gordon23a,
  title = {Towards General Single-Utensil Food Acquisition with Human-Informed Actions},
  author = {Gordon, Ethan Kroll and Nanavati, Amal and Challa, Ramya and Zhu, Bernie Hao and Faulkner, Taylor Annette Kessler and Srinivasa, Siddhartha},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages = {2414--2428},
  year = {2023},
  editor = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume = {229},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v229/gordon23a/gordon23a.pdf},
  url = {https://proceedings.mlr.press/v229/gordon23a.html},
  abstract = {Food acquisition with common general-purpose utensils is a necessary component of robot applications like in-home assistive feeding. Learning acquisition policies in this space is difficult in part because any model must contend with extensive state and action spaces. Food is extremely diverse and generally difficult to simulate, and acquisition actions like skewers, scoops, wiggles, and twirls can be parameterized in myriad ways. However, food’s visual diversity can belie a degree of physical homogeneity, and many foods allow flexibility in how they are acquired. Our key insight is therefore that a small subset of actions is sufficient to acquire a wide variety of food items. In this work, we present a methodology for identifying such a subset from limited human trajectory data. We first develop an over-parameterized action space of robot acquisition trajectories that captures the variety of human food acquisition techniques. By mapping human trajectories into this space and clustering them, we construct a discrete set of 11 actions. We demonstrate that this set can acquire a variety of food items with a $\geq 80\%$ success rate, a rate that users have said is sufficient for in-home robot-assisted feeding. Furthermore, because this set is so small, we also show that online learning can identify a sufficiently optimal action for a previously unseen food item over the course of a single meal.}
}
Endnote
%0 Conference Paper
%T Towards General Single-Utensil Food Acquisition with Human-Informed Actions
%A Ethan Kroll Gordon
%A Amal Nanavati
%A Ramya Challa
%A Bernie Hao Zhu
%A Taylor Annette Kessler Faulkner
%A Siddhartha Srinivasa
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-gordon23a
%I PMLR
%P 2414--2428
%U https://proceedings.mlr.press/v229/gordon23a.html
%V 229
%X Food acquisition with common general-purpose utensils is a necessary component of robot applications like in-home assistive feeding. Learning acquisition policies in this space is difficult in part because any model must contend with extensive state and action spaces. Food is extremely diverse and generally difficult to simulate, and acquisition actions like skewers, scoops, wiggles, and twirls can be parameterized in myriad ways. However, food’s visual diversity can belie a degree of physical homogeneity, and many foods allow flexibility in how they are acquired. Our key insight is therefore that a small subset of actions is sufficient to acquire a wide variety of food items. In this work, we present a methodology for identifying such a subset from limited human trajectory data. We first develop an over-parameterized action space of robot acquisition trajectories that captures the variety of human food acquisition techniques. By mapping human trajectories into this space and clustering them, we construct a discrete set of 11 actions. We demonstrate that this set can acquire a variety of food items with a $\geq 80\%$ success rate, a rate that users have said is sufficient for in-home robot-assisted feeding. Furthermore, because this set is so small, we also show that online learning can identify a sufficiently optimal action for a previously unseen food item over the course of a single meal.
APA
Gordon, E.K., Nanavati, A., Challa, R., Zhu, B.H., Faulkner, T.A.K. & Srinivasa, S. (2023). Towards General Single-Utensil Food Acquisition with Human-Informed Actions. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:2414-2428. Available from https://proceedings.mlr.press/v229/gordon23a.html.
