Surveillance Evasion Through Bayesian Reinforcement Learning

Dongping Qi, David Bindel, Alexander Vladimirsky
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:8448-8462, 2023.

Abstract

We consider the task of surveillance-evading path planning in a continuous setting. An Evader strives to escape from a 2D domain while minimizing the risk of detection (and immediate capture). The probability of detection is path-dependent and determined by the spatially inhomogeneous surveillance intensity, which is fixed but a priori unknown and is gradually learned in a multi-episodic setting. We introduce a Bayesian reinforcement learning algorithm that relies on Gaussian process regression (to model the surveillance intensity function based on information from prior episodes), numerical methods for Hamilton-Jacobi PDEs (to plan the best continuous trajectories based on the current model), and confidence bounds (to balance exploration and exploitation). We use numerical experiments and regret metrics to highlight the significant advantages of our approach over traditional graph-based reinforcement learning algorithms.
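The learning ingredient of this pipeline can be illustrated with a short sketch. The Python snippet below (ours, not the authors' code) fits a Gaussian process to hypothetical pointwise observations of the surveillance intensity and forms an optimistic lower-confidence-bound estimate on a planning grid; in the approach described above, such an estimate would then serve as the running cost for a Hamilton-Jacobi PDE solver. The RBF kernel, noise level, sample data, and the weight beta are all assumptions made purely for illustration.

import numpy as np

def rbf_kernel(A, B, length_scale=0.2, variance=1.0):
    # Squared-exponential kernel between two sets of 2D points (assumed choice).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2):
    # Standard GP regression: posterior mean and variance at the query points.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_obs, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - (v ** 2).sum(axis=0)  # prior variance of this kernel is 1.0
    return mean, np.maximum(var, 0.0)

# Hypothetical intensity observations gathered along earlier episodes' paths.
X_obs = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])
y_obs = np.array([0.4, 1.2, 0.6])

# Evaluate the model on a planning grid over the unit square.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
X_query = np.column_stack([gx.ravel(), gy.ravel()])
mean, var = gp_posterior(X_obs, y_obs, X_query)

beta = 2.0                        # exploration weight (assumed)
lcb = mean - beta * np.sqrt(var)  # optimistic estimate of the intensity
# lcb.reshape(50, 50) could feed a Hamilton-Jacobi solver (e.g., fast
# marching) as the running cost when planning the next episode's path.

A lower (rather than upper) confidence bound is the natural form of optimism here: the Evader minimizes accumulated detection risk, so underestimating the intensity in poorly observed regions is what draws exploratory trajectories toward them.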

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-qi23a,
  title     = {Surveillance Evasion Through Bayesian Reinforcement Learning},
  author    = {Qi, Dongping and Bindel, David and Vladimirsky, Alexander},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {8448--8462},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/qi23a/qi23a.pdf},
  url       = {https://proceedings.mlr.press/v206/qi23a.html},
  abstract  = {We consider the task of surveillance-evading path planning in a continuous setting. An Evader strives to escape from a 2D domain while minimizing the risk of detection (and immediate capture). The probability of detection is path-dependent and determined by the spatially inhomogeneous surveillance intensity, which is fixed but a priori unknown and is gradually learned in a multi-episodic setting. We introduce a Bayesian reinforcement learning algorithm that relies on Gaussian process regression (to model the surveillance intensity function based on information from prior episodes), numerical methods for Hamilton-Jacobi PDEs (to plan the best continuous trajectories based on the current model), and confidence bounds (to balance exploration and exploitation). We use numerical experiments and regret metrics to highlight the significant advantages of our approach over traditional graph-based reinforcement learning algorithms.}
}
Endnote
%0 Conference Paper
%T Surveillance Evasion Through Bayesian Reinforcement Learning
%A Dongping Qi
%A David Bindel
%A Alexander Vladimirsky
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-qi23a
%I PMLR
%P 8448--8462
%U https://proceedings.mlr.press/v206/qi23a.html
%V 206
%X We consider the task of surveillance-evading path planning in a continuous setting. An Evader strives to escape from a 2D domain while minimizing the risk of detection (and immediate capture). The probability of detection is path-dependent and determined by the spatially inhomogeneous surveillance intensity, which is fixed but a priori unknown and is gradually learned in a multi-episodic setting. We introduce a Bayesian reinforcement learning algorithm that relies on Gaussian process regression (to model the surveillance intensity function based on information from prior episodes), numerical methods for Hamilton-Jacobi PDEs (to plan the best continuous trajectories based on the current model), and confidence bounds (to balance exploration and exploitation). We use numerical experiments and regret metrics to highlight the significant advantages of our approach over traditional graph-based reinforcement learning algorithms.
APA
Qi, D., Bindel, D. & Vladimirsky, A. (2023). Surveillance Evasion Through Bayesian Reinforcement Learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:8448-8462. Available from https://proceedings.mlr.press/v206/qi23a.html.
