Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications

Jun Wang, Hosein Hasanbeig, Kaiyuan Tan, Zihe Sun, Yiannis Kantaros
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:763-776, 2025.

Abstract

This paper addresses the problem of designing control policies for agents with unknown stochastic dynamics and control objectives specified using Linear Temporal Logic (LTL). Recent Deep Reinforcement Learning (DRL) algorithms have aimed to compute policies that maximize the satisfaction probability of LTL formulas, but they often suffer from slow learning performance. To address this, we introduce a novel Deep Q-learning algorithm that significantly improves learning speed. The enhanced sample efficiency stems from a mission-driven exploration strategy that prioritizes exploration towards directions likely to contribute to mission success. Identifying these directions relies on an automaton representation of the LTL task as well as a learned neural network that partially models the agent-environment interaction. We provide comparative experiments demonstrating the efficiency of our algorithm on robot navigation tasks in unseen environments.
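To make the exploration idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of an epsilon-greedy action selection that, on exploration steps, samples actions in proportion to how likely a learned model judges each one to advance the task automaton toward acceptance; the function and argument names are illustrative assumptions only.

```python
import random

def mission_biased_action(q_values, progress_scores, epsilon=0.3, rng=random):
    """Select an action index.

    q_values:        Q(s, a) estimates, one per action (as in standard DQN).
    progress_scores: non-negative scores, one per action; higher means the
                     learned model predicts the action is more likely to
                     advance the automaton toward an accepting state.
    """
    actions = list(range(len(q_values)))
    if rng.random() < epsilon:
        total = sum(progress_scores)
        if total <= 0:
            # No progress information: fall back to uniform exploration.
            return rng.choice(actions)
        # Exploration biased toward predicted mission progress, instead of
        # the uniform random action of vanilla epsilon-greedy.
        return rng.choices(actions, weights=progress_scores, k=1)[0]
    # Exploitation: greedy with respect to the Q-values.
    return max(actions, key=lambda a: q_values[a])
```

The only change relative to vanilla epsilon-greedy is the weighted sampling on exploration steps, which is one plausible way to read "prioritizes exploration towards directions likely to contribute to mission success."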

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-wang25d,
  title = {Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications},
  author = {Wang, Jun and Hasanbeig, Hosein and Tan, Kaiyuan and Sun, Zihe and Kantaros, Yiannis},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages = {763--776},
  year = {2025},
  editor = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume = {283},
  series = {Proceedings of Machine Learning Research},
  month = {04--06 Jun},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/wang25d/wang25d.pdf},
  url = {https://proceedings.mlr.press/v283/wang25d.html},
  abstract = {This paper addresses the problem of designing control policies for agents with unknown stochastic dynamics and control objectives specified using Linear Temporal Logic (LTL). Recent Deep Reinforcement Learning (DRL) algorithms have aimed to compute policies that maximize the satisfaction probability of LTL formulas, but they often suffer from slow learning performance. To address this, we introduce a novel Deep Q-learning algorithm that significantly improves learning speed. The enhanced sample efficiency stems from a mission-driven exploration strategy that prioritizes exploration towards directions likely to contribute to mission success. Identifying these directions relies on an automaton representation of the LTL task as well as a learned neural network that partially models the agent-environment interaction. We provide comparative experiments demonstrating the efficiency of our algorithm on robot navigation tasks in unseen environments.}
}
Endnote
%0 Conference Paper
%T Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications
%A Jun Wang
%A Hosein Hasanbeig
%A Kaiyuan Tan
%A Zihe Sun
%A Yiannis Kantaros
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-wang25d
%I PMLR
%P 763--776
%U https://proceedings.mlr.press/v283/wang25d.html
%V 283
%X This paper addresses the problem of designing control policies for agents with unknown stochastic dynamics and control objectives specified using Linear Temporal Logic (LTL). Recent Deep Reinforcement Learning (DRL) algorithms have aimed to compute policies that maximize the satisfaction probability of LTL formulas, but they often suffer from slow learning performance. To address this, we introduce a novel Deep Q-learning algorithm that significantly improves learning speed. The enhanced sample efficiency stems from a mission-driven exploration strategy that prioritizes exploration towards directions likely to contribute to mission success. Identifying these directions relies on an automaton representation of the LTL task as well as a learned neural network that partially models the agent-environment interaction. We provide comparative experiments demonstrating the efficiency of our algorithm on robot navigation tasks in unseen environments.
APA
Wang, J., Hasanbeig, H., Tan, K., Sun, Z. & Kantaros, Y. (2025). Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:763-776. Available from https://proceedings.mlr.press/v283/wang25d.html.
