Novelty Detection in Reinforcement Learning with World Models

Geigh Zollicoffer, Kenneth Eaton, Jonathan C Balloch, Julia Kim, Wei Zhou, Robert Wright, Mark Riedl
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:80740-80758, 2025.

Abstract

Reinforcement learning (RL) using world models has seen significant recent successes. However, when a sudden change to world mechanics or properties occurs, agent performance and reliability can decline dramatically. We refer to such sudden changes in visual properties or state transitions as novelties. Implementing novelty detection within generated world-model frameworks is a crucial task for protecting the agent once deployed. In this paper, we propose straightforward bounding approaches that incorporate novelty detection into world-model RL agents by using the misalignment between the world model’s hallucinated states and the true observed states as a novelty score. We provide effective approaches for detecting novelties in a distribution of transitions learned by an agent in a world model. Finally, we show the advantage of our work in a novel environment compared to traditional machine learning novelty detection methods as well as currently accepted RL-focused novelty detection algorithms.
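
As a rough illustration of the idea in the abstract (a minimal sketch, not the authors' implementation), the snippet below scores each transition by the mean squared misalignment between the world model's hallucinated observation and the true observation, and flags novelty when the score exceeds a bound calibrated on nominal, pre-novelty rollouts. The function names, the MSE score, and the quantile-based bound are illustrative assumptions; the synthetic arrays merely stand in for world-model predictions and environment observations.

import numpy as np

def misalignment_score(predicted_obs: np.ndarray, true_obs: np.ndarray) -> float:
    """Per-step novelty score: mean squared error between the world model's
    hallucinated observation and the environment's true observation."""
    return float(np.mean((predicted_obs - true_obs) ** 2))

def calibrate_bound(nominal_scores: np.ndarray, quantile: float = 0.99) -> float:
    """Set a detection bound from scores collected in the nominal (pre-novelty)
    environment, here a high quantile of the empirical errors."""
    return float(np.quantile(nominal_scores, quantile))

def is_novel(score: float, bound: float) -> bool:
    """Flag a transition as novel when its misalignment exceeds the bound."""
    return score > bound

# Synthetic data standing in for world-model predictions and true observations.
rng = np.random.default_rng(0)
nominal = np.array([
    misalignment_score(rng.normal(size=64), rng.normal(scale=1.01, size=64))
    for _ in range(500)
])
bound = calibrate_bound(nominal)
shifted = misalignment_score(rng.normal(size=64), rng.normal(loc=2.0, size=64))
print(is_novel(shifted, bound))  # likely True under the simulated distribution shift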

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zollicoffer25a,
  title     = {Novelty Detection in Reinforcement Learning with World Models},
  author    = {Zollicoffer, Geigh and Eaton, Kenneth and Balloch, Jonathan C and Kim, Julia and Zhou, Wei and Wright, Robert and Riedl, Mark},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {80740--80758},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zollicoffer25a/zollicoffer25a.pdf},
  url       = {https://proceedings.mlr.press/v267/zollicoffer25a.html},
  abstract  = {Reinforcement learning (RL) using world models has found significant recent successes. However, when a sudden change to world mechanics or properties occurs then agent performance and reliability can dramatically decline. We refer to the sudden change in visual properties or state transitions as novelties. Implementing novelty detection within generated world model frameworks is a crucial task for protecting the agent when deployed. In this paper, we propose straightforward bounding approaches to incorporate novelty detection into world model RL agents by utilizing the misalignment of the world model’s hallucinated states and the true observed states as a novelty score. We provide effective approaches to detecting novelties in a distribution of transitions learned by an agent in a world model. Finally, we show the advantage of our work in a novel environment compared to traditional machine learning novelty detection methods as well as currently accepted RL-focused novelty detection algorithms.}
}
Endnote
%0 Conference Paper
%T Novelty Detection in Reinforcement Learning with World Models
%A Geigh Zollicoffer
%A Kenneth Eaton
%A Jonathan C Balloch
%A Julia Kim
%A Wei Zhou
%A Robert Wright
%A Mark Riedl
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zollicoffer25a
%I PMLR
%P 80740--80758
%U https://proceedings.mlr.press/v267/zollicoffer25a.html
%V 267
%X Reinforcement learning (RL) using world models has found significant recent successes. However, when a sudden change to world mechanics or properties occurs then agent performance and reliability can dramatically decline. We refer to the sudden change in visual properties or state transitions as novelties. Implementing novelty detection within generated world model frameworks is a crucial task for protecting the agent when deployed. In this paper, we propose straightforward bounding approaches to incorporate novelty detection into world model RL agents by utilizing the misalignment of the world model’s hallucinated states and the true observed states as a novelty score. We provide effective approaches to detecting novelties in a distribution of transitions learned by an agent in a world model. Finally, we show the advantage of our work in a novel environment compared to traditional machine learning novelty detection methods as well as currently accepted RL-focused novelty detection algorithms.
APA
Zollicoffer, G., Eaton, K., Balloch, J.C., Kim, J., Zhou, W., Wright, R. & Riedl, M. (2025). Novelty Detection in Reinforcement Learning with World Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:80740-80758. Available from https://proceedings.mlr.press/v267/zollicoffer25a.html.