SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels

Malte Mosbach, Jan Niklas Ewertz, Angel Villar-Corrales, Sven Behnke
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:44911-44935, 2025.

Abstract

Learning a latent dynamics model provides a task-agnostic representation of an agent’s understanding of its environment. Leveraging this knowledge for model-based reinforcement learning (RL) holds the potential to improve sample efficiency over model-free methods by learning from imagined rollouts. Furthermore, because the latent space serves as input to behavior models, the informative representations learned by the world model facilitate efficient learning of desired skills. Most existing methods rely on holistic representations of the environment’s state. In contrast, humans reason about objects and their interactions, predicting how actions will affect specific parts of their surroundings. Inspired by this, we propose Slot-Attention for Object-centric Latent Dynamics (SOLD), a novel model-based RL algorithm that learns object-centric dynamics models in an unsupervised manner from pixel inputs. We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over. Our results show that SOLD outperforms DreamerV3 and TD-MPC2, state-of-the-art model-based RL algorithms, across a range of multi-object manipulation environments that require both relational reasoning and dexterous control. Videos and code are available at https://slot-latent-dynamics.github.io.
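For readers unfamiliar with the mechanism named in the title, below is a minimal PyTorch sketch of Slot Attention (Locatello et al., 2020), the component SOLD builds on to extract one latent vector per object from encoded image features. This is an illustrative sketch, not the paper's implementation; the module layout and hyperparameters (num_slots, dim, iters) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotAttention(nn.Module):
    """Iterative slot attention: K slots compete to explain N input features.
    Sketch following Locatello et al. (2020); hyperparameters are illustrative."""
    def __init__(self, num_slots=7, dim=64, iters=3, hidden=128, eps=1e-8):
        super().__init__()
        self.num_slots, self.iters, self.eps = num_slots, iters, eps
        self.scale = dim ** -0.5
        # Slots are sampled from a learned Gaussian at every forward pass.
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_log_sigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.norm_mlp = nn.LayerNorm(dim)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, inputs):  # inputs: (B, N, dim) flattened image features
        B, N, D = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_mu + self.slots_log_sigma.exp() * torch.randn(
            B, self.num_slots, D, device=inputs.device)
        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot axis: slots compete for each input feature.
            attn = F.softmax(torch.einsum('bkd,bnd->bkn', q, k) * self.scale, dim=1)
            attn = attn / (attn.sum(dim=-1, keepdim=True) + self.eps)
            updates = torch.einsum('bkn,bnd->bkd', attn, v)  # weighted mean of values
            slots = self.gru(updates.reshape(-1, D),
                             slots_prev.reshape(-1, D)).view(B, self.num_slots, D)
            slots = slots + self.mlp(self.norm_mlp(slots))  # residual refinement
        return slots  # (B, num_slots, dim): one latent per object/slot

For example, applying the module to a flattened 32x32 feature map, SlotAttention()(torch.randn(2, 1024, 64)), yields a (2, 7, 64) tensor of slots; the softmax over the slot axis is what makes slots compete to explain each input feature, which is what yields the object-centric decomposition described in the abstract.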

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-mosbach25a,
  title     = {{SOLD}: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels},
  author    = {Mosbach, Malte and Ewertz, Jan Niklas and Villar-Corrales, Angel and Behnke, Sven},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {44911--44935},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/mosbach25a/mosbach25a.pdf},
  url       = {https://proceedings.mlr.press/v267/mosbach25a.html},
  abstract  = {Learning a latent dynamics model provides a task-agnostic representation of an agent’s understanding of its environment. Leveraging this knowledge for model-based reinforcement learning (RL) holds the potential to improve sample efficiency over model-free methods by learning from imagined rollouts. Furthermore, because the latent space serves as input to behavior models, the informative representations learned by the world model facilitate efficient learning of desired skills. Most existing methods rely on holistic representations of the environment’s state. In contrast, humans reason about objects and their interactions, predicting how actions will affect specific parts of their surroundings. Inspired by this, we propose Slot-Attention for Object-centric Latent Dynamics (SOLD), a novel model-based RL algorithm that learns object-centric dynamics models in an unsupervised manner from pixel inputs. We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over. Our results show that SOLD outperforms DreamerV3 and TD-MPC2, state-of-the-art model-based RL algorithms, across a range of multi-object manipulation environments that require both relational reasoning and dexterous control. Videos and code are available at https://slot-latent-dynamics.github.io.}
}
Endnote
%0 Conference Paper
%T SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels
%A Malte Mosbach
%A Jan Niklas Ewertz
%A Angel Villar-Corrales
%A Sven Behnke
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-mosbach25a
%I PMLR
%P 44911--44935
%U https://proceedings.mlr.press/v267/mosbach25a.html
%V 267
%X Learning a latent dynamics model provides a task-agnostic representation of an agent’s understanding of its environment. Leveraging this knowledge for model-based reinforcement learning (RL) holds the potential to improve sample efficiency over model-free methods by learning from imagined rollouts. Furthermore, because the latent space serves as input to behavior models, the informative representations learned by the world model facilitate efficient learning of desired skills. Most existing methods rely on holistic representations of the environment’s state. In contrast, humans reason about objects and their interactions, predicting how actions will affect specific parts of their surroundings. Inspired by this, we propose Slot-Attention for Object-centric Latent Dynamics (SOLD), a novel model-based RL algorithm that learns object-centric dynamics models in an unsupervised manner from pixel inputs. We demonstrate that the structured latent space not only improves model interpretability but also provides a valuable input space for behavior models to reason over. Our results show that SOLD outperforms DreamerV3 and TD-MPC2, state-of-the-art model-based RL algorithms, across a range of multi-object manipulation environments that require both relational reasoning and dexterous control. Videos and code are available at https://slot-latent-dynamics.github.io.
APA
Mosbach, M., Ewertz, J.N., Villar-Corrales, A. & Behnke, S. (2025). SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:44911-44935. Available from https://proceedings.mlr.press/v267/mosbach25a.html.