Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach

Muhammad Abdullah Naeem
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:383-394, 2023.

Abstract

Via operator theoretic methods, we formalize the concentration phenomenon for a given observable $r$ of a discrete-time Markov chain with invariant ergodic measure $\mu_{\pi}$, possibly supported on an unbounded state space. The main contribution of this paper is to circumvent tedious probabilistic methods by studying the composition of the Markov transition operator $P$ with the multiplication operator defined by $e^{r}$. It turns out that even when the observable/reward function is unbounded, if for some $q>2$, $\|e^{r}\|_{q \rightarrow 2} \propto \exp\big(\mu_{\pi}(r) +\frac{2q}{q-2}\big)$ and $P$ is hyperbounded with norm control $\|P\|_{2 \rightarrow q }< e^{\frac{1}{2}[\frac{1}{2}-\frac{1}{q}]}$, then sharp non-asymptotic concentration bounds follow. A \emph{transport-entropy} inequality ensures the aforementioned upper bound on the multiplication operator for all $q>2$. The role of \emph{reversibility} in the concentration phenomenon is demystified. These results are particularly useful for the reinforcement learning and controls communities, as they allow for concentration inequalities w.r.t. standard unbounded observables/reward functions where exact knowledge of the system is not available, let alone reversibility of the stationary measure.

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-naeem23a,
  title = {Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach},
  author = {Naeem, Muhammad Abdullah},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages = {383--394},
  year = {2023},
  editor = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume = {211},
  series = {Proceedings of Machine Learning Research},
  month = {15--16 Jun},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v211/naeem23a/naeem23a.pdf},
  url = {https://proceedings.mlr.press/v211/naeem23a.html},
  abstract = {Via operator theoretic methods, we formalize the concentration phenomenon for a given observable $r$ of a discrete-time Markov chain with invariant ergodic measure $\mu_{\pi}$, possibly supported on an unbounded state space. The main contribution of this paper is to circumvent tedious probabilistic methods by studying the composition of the Markov transition operator $P$ with the multiplication operator defined by $e^{r}$. It turns out that even when the observable/reward function is unbounded, if for some $q>2$, $\|e^{r}\|_{q \rightarrow 2} \propto \exp\big(\mu_{\pi}(r) +\frac{2q}{q-2}\big)$ and $P$ is hyperbounded with norm control $\|P\|_{2 \rightarrow q }< e^{\frac{1}{2}[\frac{1}{2}-\frac{1}{q}]}$, then sharp non-asymptotic concentration bounds follow. A \emph{transport-entropy} inequality ensures the aforementioned upper bound on the multiplication operator for all $q>2$. The role of \emph{reversibility} in the concentration phenomenon is demystified. These results are particularly useful for the reinforcement learning and controls communities, as they allow for concentration inequalities w.r.t. standard unbounded observables/reward functions where exact knowledge of the system is not available, let alone reversibility of the stationary measure.}
}
Endnote
%0 Conference Paper
%T Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach
%A Muhammad Abdullah Naeem
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-naeem23a
%I PMLR
%P 383--394
%U https://proceedings.mlr.press/v211/naeem23a.html
%V 211
%X Via operator theoretic methods, we formalize the concentration phenomenon for a given observable $r$ of a discrete-time Markov chain with invariant ergodic measure $\mu_{\pi}$, possibly supported on an unbounded state space. The main contribution of this paper is to circumvent tedious probabilistic methods by studying the composition of the Markov transition operator $P$ with the multiplication operator defined by $e^{r}$. It turns out that even when the observable/reward function is unbounded, if for some $q>2$, $\|e^{r}\|_{q \rightarrow 2} \propto \exp\big(\mu_{\pi}(r) +\frac{2q}{q-2}\big)$ and $P$ is hyperbounded with norm control $\|P\|_{2 \rightarrow q }< e^{\frac{1}{2}[\frac{1}{2}-\frac{1}{q}]}$, then sharp non-asymptotic concentration bounds follow. A \emph{transport-entropy} inequality ensures the aforementioned upper bound on the multiplication operator for all $q>2$. The role of \emph{reversibility} in the concentration phenomenon is demystified. These results are particularly useful for the reinforcement learning and controls communities, as they allow for concentration inequalities w.r.t. standard unbounded observables/reward functions where exact knowledge of the system is not available, let alone reversibility of the stationary measure.
APA
Naeem, M. A. (2023). Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:383-394. Available from https://proceedings.mlr.press/v211/naeem23a.html.