Extracting Visual Plans from Unlabeled Videos via Symbolic Guidance

Wenyan Yang, Ahmet Tikna, Yi Zhao, Yuying Zhang, Luigi Palopoli, Marco Roveri, Joni Pajarinen
Proceedings of The 9th Conference on Robot Learning, PMLR 305:3995-4018, 2025.

Abstract

Visual planning, which provides a sequence of intermediate visual subgoals to a goal-conditioned low-level policy, achieves promising performance on long-horizon manipulation tasks. To obtain the subgoals, existing methods typically resort to video generation models, which suffer from model hallucination and high computational cost. We present Vis2Plan, an efficient, explainable, and white-box visual planning framework powered by symbolic guidance. From raw, unlabeled play data, Vis2Plan harnesses vision foundation models to automatically extract a compact set of task symbols, which allows building a high-level symbolic transition graph for multi-goal, multi-stage planning. At test time, given a desired task goal, our planner plans at the symbolic level and assembles a sequence of physically consistent intermediate subgoal images grounded in the underlying symbolic representation. Vis2Plan outperforms strong diffusion video generation-based visual planners, delivering a 53% higher aggregate success rate while generating visual plans 35× faster. The results indicate that Vis2Plan generates physically consistent image goals while offering fully inspectable reasoning steps.
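To make the planning step concrete, the sketch below illustrates the symbolic-graph idea the abstract describes: symbol sequences observed in play data define a transition graph, and a graph search returns an ordered list of symbolic subgoals. This is a minimal illustration, not the Vis2Plan implementation; the symbol names and the helpers build_transition_graph and plan_symbolic are hypothetical, and the paper's actual symbol extraction (via vision foundation models) and image grounding are omitted here.

    # Minimal sketch of symbolic-graph visual planning; NOT the Vis2Plan code.
    # Assumes each play-data frame has already been mapped to a discrete task
    # symbol (e.g., by clustering vision-foundation-model features).
    from collections import defaultdict, deque

    def build_transition_graph(symbol_sequences):
        """Directed adjacency sets over consecutive symbolic states."""
        graph = defaultdict(set)
        for seq in symbol_sequences:
            for a, b in zip(seq, seq[1:]):
                if a != b:  # record only actual symbolic state changes
                    graph[a].add(b)
        return graph

    def plan_symbolic(graph, start, goal):
        """BFS for a shortest sequence of symbolic subgoals from start to goal."""
        parent = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:  # reconstruct the path by walking parents back
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in graph.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None  # goal not reachable in the observed transitions

    # Usage with two toy trajectories over hypothetical kitchen symbols:
    sequences = [
        ["drawer_closed", "drawer_open", "object_in_drawer"],
        ["drawer_open", "object_grasped", "object_on_stove"],
    ]
    graph = build_transition_graph(sequences)
    print(plan_symbolic(graph, "drawer_closed", "object_on_stove"))
    # -> ['drawer_closed', 'drawer_open', 'object_grasped', 'object_on_stove']

In the paper's framing, each symbol on the returned path would then be grounded back to an image consistent with that symbolic state (presumably drawn from the play data itself), which is why the resulting subgoal images are physically consistent rather than hallucinated, and why every reasoning step remains inspectable.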

Cite this Paper

BibTeX
@InProceedings{pmlr-v305-yang25c,
  title     = {Extracting Visual Plans from Unlabeled Videos via Symbolic Guidance},
  author    = {Yang, Wenyan and Tikna, Ahmet and Zhao, Yi and Zhang, Yuying and Palopoli, Luigi and Roveri, Marco and Pajarinen, Joni},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {3995--4018},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/yang25c/yang25c.pdf},
  url       = {https://proceedings.mlr.press/v305/yang25c.html}
}
Endnote
%0 Conference Paper
%T Extracting Visual Plans from Unlabeled Videos via Symbolic Guidance
%A Wenyan Yang
%A Ahmet Tikna
%A Yi Zhao
%A Yuying Zhang
%A Luigi Palopoli
%A Marco Roveri
%A Joni Pajarinen
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-yang25c
%I PMLR
%P 3995--4018
%U https://proceedings.mlr.press/v305/yang25c.html
%V 305
APA
Yang, W., Tikna, A., Zhao, Y., Zhang, Y., Palopoli, L., Roveri, M. & Pajarinen, J. (2025). Extracting Visual Plans from Unlabeled Videos via Symbolic Guidance. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:3995-4018. Available from https://proceedings.mlr.press/v305/yang25c.html.
