Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents

Jinyeon Kim, Cheolhong Min, Byeonghwi Kim, Jonghyun Choi
Proceedings of The 8th Conference on Robot Learning, PMLR 270:2396-2428, 2025.

Abstract

When we humans perform a task, we account for changes in the environment, such as objects’ arrangement altered by interactions with objects or other causes; e.g., when we find a mug to clean and it is already clean, we skip cleaning it. But even state-of-the-art embodied agents often ignore such environmental changes when performing a task, leading to failure to complete the task, execution of unnecessary actions, or fixing a mistake only after it is made. Here, we propose Pre-emptive Action Revision by Environmental feeDback (PRED), which allows an embodied agent to revise its actions in response to the perceived environmental status before it makes mistakes. We empirically validate PRED and observe that it outperforms the prior art on two challenging benchmarks in virtual environments, TEACh and ALFRED, by noticeable margins on most metrics, including unseen success rates, with shorter execution time, implying an efficiently behaved agent. Furthermore, we demonstrate the effectiveness of the proposed method with real robot experiments.

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-kim25b,
  title     = {Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents},
  author    = {Kim, Jinyeon and Min, Cheolhong and Kim, Byeonghwi and Choi, Jonghyun},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {2396--2428},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/kim25b/kim25b.pdf},
  url       = {https://proceedings.mlr.press/v270/kim25b.html},
  abstract  = {When we, humans, perform a task, we consider changes in environments such as objects’ arrangement due to interactions with objects and other reasons; e.g., when we find a mug to clean, if it is already clean, we skip cleaning it. But even the state-of-the-art embodied agents often ignore changed environments when performing a task, leading to failure to complete the task, executing unnecessary actions, or fixing the mistake after it was made. Here, we propose Pre-emptive Action Revision by Environmental feeDback (PRED) that allows an embodied agent to revise their action in response to the perceived environmental status before it makes mistakes. We empirically validate PRED and observe that it outperforms the prior art on two challenging benchmarks in the virtual environment, TEACh and ALFRED, by noticeable margins in most metrics, including unseen success rates, with shorter execution time, implying an efficiently behaved agent. Furthermore, we demonstrate the effectiveness of the proposed method with real robot experiments.}
}
Endnote
%0 Conference Paper
%T Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents
%A Jinyeon Kim
%A Cheolhong Min
%A Byeonghwi Kim
%A Jonghyun Choi
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-kim25b
%I PMLR
%P 2396--2428
%U https://proceedings.mlr.press/v270/kim25b.html
%V 270
%X When we, humans, perform a task, we consider changes in environments such as objects’ arrangement due to interactions with objects and other reasons; e.g., when we find a mug to clean, if it is already clean, we skip cleaning it. But even the state-of-the-art embodied agents often ignore changed environments when performing a task, leading to failure to complete the task, executing unnecessary actions, or fixing the mistake after it was made. Here, we propose Pre-emptive Action Revision by Environmental feeDback (PRED) that allows an embodied agent to revise their action in response to the perceived environmental status before it makes mistakes. We empirically validate PRED and observe that it outperforms the prior art on two challenging benchmarks in the virtual environment, TEACh and ALFRED, by noticeable margins in most metrics, including unseen success rates, with shorter execution time, implying an efficiently behaved agent. Furthermore, we demonstrate the effectiveness of the proposed method with real robot experiments.
APA
Kim, J., Min, C., Kim, B. &amp; Choi, J. (2025). Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:2396-2428. Available from https://proceedings.mlr.press/v270/kim25b.html.