Lipschitz Continuity in Model-based Reinforcement Learning
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:264-273, 2018.
Abstract
We examine the impact of learning Lipschitz continuous models in the context of model-based reinforcement learning. We provide a novel bound on multi-step prediction error of Lipschitz models where we quantify the error using the Wasserstein metric. We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz. We conclude with empirical results that show the benefits of controlling the Lipschitz constant of neural-network models.
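The abstract mentions controlling the Lipschitz constant of neural-network models. The paper does not specify a mechanism here, but one standard approach is a minimal sketch, assuming a feedforward network with 1-Lipschitz activations (e.g. ReLU): the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, so rescaling each weight matrix caps that product. The function names below are illustrative, not from the paper.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    # For a feedforward net with 1-Lipschitz activations, the product
    # of the layers' spectral norms (largest singular values) is an
    # upper bound on the network's Lipschitz constant.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def constrain_lipschitz(weights, k):
    # Rescale each of the L weight matrices so its spectral norm is at
    # most k**(1/L); the product bound is then at most k.
    per_layer = k ** (1.0 / len(weights))
    out = []
    for W in weights:
        s = np.linalg.norm(W, 2)
        out.append(W if s <= per_layer else W * (per_layer / s))
    return out
```

This is a hard projection; in practice one might instead add a soft penalty on the spectral norms to the training loss, which is closer in spirit to "controlling" rather than strictly bounding the constant.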