Safe exploration in reproducing kernel Hilbert spaces

Abdullah Tokmak, Kiran G. Krishnan, Thomas B. Schön, Dominik Baumann
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:784-792, 2025.

Abstract

Popular safe Bayesian optimization (BO) algorithms learn control policies for safety-critical systems in unknown environments. However, most algorithms make a smoothness assumption, which is encoded by a known bounded norm in a reproducing kernel Hilbert space (RKHS). The RKHS is a potentially infinite-dimensional space, and it remains unclear how to reliably obtain the RKHS norm of an unknown function. In this work, we propose a safe BO algorithm capable of estimating the RKHS norm from data. We provide statistical guarantees on the RKHS norm estimation, integrate the estimated RKHS norm into existing confidence intervals and show that we retain theoretical guarantees, and prove safety of the resulting safe BO algorithm. We apply our algorithm to safely optimize reinforcement learning policies on physics simulators and on a real inverted pendulum, demonstrating improved performance, safety, and scalability compared to the state-of-the-art.

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-tokmak25a,
  title     = {Safe exploration in reproducing kernel Hilbert spaces},
  author    = {Tokmak, Abdullah and Krishnan, Kiran G. and Sch{\"o}n, Thomas B. and Baumann, Dominik},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {784--792},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/tokmak25a/tokmak25a.pdf},
  url       = {https://proceedings.mlr.press/v258/tokmak25a.html},
  abstract  = {Popular safe Bayesian optimization (BO) algorithms learn control policies for safety-critical systems in unknown environments. However, most algorithms make a smoothness assumption, which is encoded by a known bounded norm in a reproducing kernel Hilbert space (RKHS). The RKHS is a potentially infinite-dimensional space, and it remains unclear how to reliably obtain the RKHS norm of an unknown function. In this work, we propose a safe BO algorithm capable of estimating the RKHS norm from data. We provide statistical guarantees on the RKHS norm estimation, integrate the estimated RKHS norm into existing confidence intervals and show that we retain theoretical guarantees, and prove safety of the resulting safe BO algorithm. We apply our algorithm to safely optimize reinforcement learning policies on physics simulators and on a real inverted pendulum, demonstrating improved performance, safety, and scalability compared to the state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Safe exploration in reproducing kernel Hilbert spaces
%A Abdullah Tokmak
%A Kiran G. Krishnan
%A Thomas B. Schön
%A Dominik Baumann
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-tokmak25a
%I PMLR
%P 784--792
%U https://proceedings.mlr.press/v258/tokmak25a.html
%V 258
%X Popular safe Bayesian optimization (BO) algorithms learn control policies for safety-critical systems in unknown environments. However, most algorithms make a smoothness assumption, which is encoded by a known bounded norm in a reproducing kernel Hilbert space (RKHS). The RKHS is a potentially infinite-dimensional space, and it remains unclear how to reliably obtain the RKHS norm of an unknown function. In this work, we propose a safe BO algorithm capable of estimating the RKHS norm from data. We provide statistical guarantees on the RKHS norm estimation, integrate the estimated RKHS norm into existing confidence intervals and show that we retain theoretical guarantees, and prove safety of the resulting safe BO algorithm. We apply our algorithm to safely optimize reinforcement learning policies on physics simulators and on a real inverted pendulum, demonstrating improved performance, safety, and scalability compared to the state-of-the-art.
APA
Tokmak, A., Krishnan, K. G., Schön, T. B., & Baumann, D. (2025). Safe exploration in reproducing kernel Hilbert spaces. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:784-792. Available from https://proceedings.mlr.press/v258/tokmak25a.html.