QORA: Zero-Shot Transfer via Interpretable Object-Relational Model Learning

Gabriel Stella, Dmitri Loguinov
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:46572-46590, 2024.

Abstract

Although neural networks have demonstrated significant success in various reinforcement-learning tasks, even the highest-performing deep models often fail to generalize. As an alternative, object-oriented approaches offer a promising path towards better efficiency and generalization; however, they typically address narrow problem classes and require extensive domain knowledge. To overcome these limitations, we introduce QORA, an algorithm that constructs models expressive enough to solve a variety of domains, including those with stochastic transition functions, directly from a domain-agnostic object-based state representation. We also provide a novel benchmark suite to evaluate learners’ generalization capabilities. In our test domains, QORA achieves 100% predictive accuracy using almost four orders of magnitude fewer observations than a neural-network baseline, demonstrates zero-shot transfer to modified environments, and adapts rapidly when applied to tasks involving previously unseen object interactions. Finally, we give examples of QORA’s learned rules, showing them to be easily interpretable.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-stella24a,
  title     = {{QORA}: Zero-Shot Transfer via Interpretable Object-Relational Model Learning},
  author    = {Stella, Gabriel and Loguinov, Dmitri},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {46572--46590},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/stella24a/stella24a.pdf},
  url       = {https://proceedings.mlr.press/v235/stella24a.html},
  abstract  = {Although neural networks have demonstrated significant success in various reinforcement-learning tasks, even the highest-performing deep models often fail to generalize. As an alternative, object-oriented approaches offer a promising path towards better efficiency and generalization; however, they typically address narrow problem classes and require extensive domain knowledge. To overcome these limitations, we introduce QORA, an algorithm that constructs models expressive enough to solve a variety of domains, including those with stochastic transition functions, directly from a domain-agnostic object-based state representation. We also provide a novel benchmark suite to evaluate learners' generalization capabilities. In our test domains, QORA achieves 100% predictive accuracy using almost four orders of magnitude fewer observations than a neural-network baseline, demonstrates zero-shot transfer to modified environments, and adapts rapidly when applied to tasks involving previously unseen object interactions. Finally, we give examples of QORA's learned rules, showing them to be easily interpretable.}
}
EndNote
%0 Conference Paper
%T QORA: Zero-Shot Transfer via Interpretable Object-Relational Model Learning
%A Gabriel Stella
%A Dmitri Loguinov
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-stella24a
%I PMLR
%P 46572--46590
%U https://proceedings.mlr.press/v235/stella24a.html
%V 235
%X Although neural networks have demonstrated significant success in various reinforcement-learning tasks, even the highest-performing deep models often fail to generalize. As an alternative, object-oriented approaches offer a promising path towards better efficiency and generalization; however, they typically address narrow problem classes and require extensive domain knowledge. To overcome these limitations, we introduce QORA, an algorithm that constructs models expressive enough to solve a variety of domains, including those with stochastic transition functions, directly from a domain-agnostic object-based state representation. We also provide a novel benchmark suite to evaluate learners' generalization capabilities. In our test domains, QORA achieves 100% predictive accuracy using almost four orders of magnitude fewer observations than a neural-network baseline, demonstrates zero-shot transfer to modified environments, and adapts rapidly when applied to tasks involving previously unseen object interactions. Finally, we give examples of QORA's learned rules, showing them to be easily interpretable.
APA
Stella, G. & Loguinov, D. (2024). QORA: Zero-Shot Transfer via Interpretable Object-Relational Model Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:46572-46590. Available from https://proceedings.mlr.press/v235/stella24a.html.