A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks

Thomas Schmied, Thomas Adler, Vihang Prakash Patil, Maximilian Beck, Korbinian Pöppel, Johannes Brandstetter, Günter Klambauer, Razvan Pascanu, Sepp Hochreiter
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:53343-53387, 2025.

Abstract

In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications, such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.
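
The abstract's efficiency claim rests on a simple contrast: a recurrent backbone carries a fixed-size state during inference, so each action prediction costs the same no matter how long the episode already is, while Transformer-style attention must attend over an ever-growing context. The sketch below (plain NumPy, with toy dimensions and a generic tanh update chosen purely for illustration; it is not the paper's xLSTM cell) shows this difference in per-step cost.

import numpy as np

# Illustrative sketch only, not the paper's implementation. Shapes and the
# toy update rule are assumptions made for demonstration.

d = 64  # hidden/state dimension (assumed)

def recurrent_step(state, x, W_state, W_in):
    """One inference step of a generic recurrent cell: O(d^2) work,
    independent of how many steps have already been processed."""
    return np.tanh(state @ W_state + x @ W_in)

def attention_step(context, x, W_q, W_k, W_v):
    """One inference step of causal attention with a growing cache:
    per-step cost grows with the number of cached steps, so total
    cost over T steps is quadratic in T."""
    context = np.vstack([context, x])            # cache all past steps
    q = x @ W_q                                  # query for the current step
    scores = (context @ W_k) @ q.T / np.sqrt(d)  # attend over the whole history
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return context, weights.T @ (context @ W_v)

rng = np.random.default_rng(0)
W_state, W_in = rng.normal(size=(d, d)), rng.normal(size=(d, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

state = np.zeros((1, d))
context = np.empty((0, d))
for t in range(1000):                     # e.g. a long control episode
    x = rng.normal(size=(1, d))           # current observation/action embedding
    state = recurrent_step(state, x, W_state, W_in)         # constant time and memory
    context, _ = attention_step(context, x, W_q, W_k, W_v)  # time and memory grow with t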

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-schmied25a,
  title     = {A Large Recurrent Action Model: x{LSTM} enables Fast Inference for Robotics Tasks},
  author    = {Schmied, Thomas and Adler, Thomas and Patil, Vihang Prakash and Beck, Maximilian and P\"{o}ppel, Korbinian and Brandstetter, Johannes and Klambauer, G\"{u}nter and Pascanu, Razvan and Hochreiter, Sepp},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {53343--53387},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/schmied25a/schmied25a.pdf},
  url       = {https://proceedings.mlr.press/v267/schmied25a.html},
  abstract  = {In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications, such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.}
}
Endnote
%0 Conference Paper
%T A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
%A Thomas Schmied
%A Thomas Adler
%A Vihang Prakash Patil
%A Maximilian Beck
%A Korbinian Pöppel
%A Johannes Brandstetter
%A Günter Klambauer
%A Razvan Pascanu
%A Sepp Hochreiter
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-schmied25a
%I PMLR
%P 53343--53387
%U https://proceedings.mlr.press/v267/schmied25a.html
%V 267
%X In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications, such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.
APA
Schmied, T., Adler, T., Patil, V.P., Beck, M., Pöppel, K., Brandstetter, J., Klambauer, G., Pascanu, R. & Hochreiter, S. (2025). A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:53343-53387. Available from https://proceedings.mlr.press/v267/schmied25a.html.
