Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation

Arnob Ghosh
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:1112-1124, 2023.

Abstract

We consider a multi-agent episodic MDP setup in which an agent (the leader) takes an action at each step of the episode, followed by another agent (the follower). The state evolution and the rewards depend on the joint action pair of the leader and the follower. Such interactions arise in many domains, including smart grids, mechanism design, security, and policymaking. We are interested in learning policies for both players with provable performance guarantees under a bandit feedback setting. We focus on a setup where both the leader and the follower are {\em non-myopic}, i.e., each seeks to maximize its reward over the entire episode, and we consider a linear MDP, which can model the continuous state spaces common in many RL applications. We propose a {\em model-free} RL algorithm and show that $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret bounds can be achieved for both the leader and the follower, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, and $T$ is the total number of steps, under the bandit feedback information setup. {\em Thus, our result holds even when the number of states becomes infinite.} The algorithm relies on a {\em novel} adaptation of the LSVI-UCB algorithm: we replace the standard greedy policy (the best response) with a soft-max policy for both the leader and the follower. This turns out to be key to establishing a uniform concentration bound for the value functions. To the best of our knowledge, this is the first sub-linear regret guarantee for Markov games with non-myopic followers under function approximation. We also obtain an $\tilde{\mathcal{O}}(\sqrt{d^3H^4/K})$-approximate Coarse Correlated Stackelberg equilibrium, where $K$ is the number of episodes.
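To make the key algorithmic idea concrete, below is a minimal Python sketch of an LSVI-UCB style optimistic Q-estimate combined with a soft-max action distribution in place of the greedy arg-max, as described in the abstract. All names and parameter choices (the temperature, the bonus scale beta, the toy dimensions) are illustrative assumptions, not the paper's notation or a faithful implementation of its algorithm.

import numpy as np

def softmax_policy(q_values, temperature=1.0):
    """Soft-max distribution over actions given estimated Q-values.

    Replacing the greedy arg-max with a soft-max keeps the induced
    value function smooth in the Q estimates, which is the property
    the abstract credits for the uniform concentration bound.
    """
    logits = q_values / temperature
    logits -= logits.max()           # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def lsvi_ucb_q(phi, w, Lambda, beta, max_value):
    """LSVI-UCB style optimistic Q estimate for one feature vector phi.

    phi      : d-dimensional feature of a (state, joint-action) pair
    w        : least-squares weight vector fit on past data
    Lambda   : regularized Gram matrix of past features
    beta     : exploration-bonus scale (illustrative value here)
    max_value: cap on the Q-value (e.g., remaining horizon)
    """
    bonus = beta * np.sqrt(phi @ np.linalg.solve(Lambda, phi))
    return min(phi @ w + bonus, max_value)

# Toy usage: sample the leader's action from a soft-max over optimistic Q-values.
rng = np.random.default_rng(0)
d, n_actions, H = 4, 3, 10
w = rng.normal(size=d)
Lambda = np.eye(d)
features = rng.normal(size=(n_actions, d))
q = np.array([lsvi_ucb_q(f, w, Lambda, beta=1.0, max_value=H) for f in features])
action = rng.choice(n_actions, p=softmax_policy(q, temperature=0.5))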

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-ghosh23a,
  title     = {Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation},
  author    = {Ghosh, Arnob},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {1112--1124},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/ghosh23a/ghosh23a.pdf},
  url       = {https://proceedings.mlr.press/v211/ghosh23a.html}
}
Endnote
%0 Conference Paper
%T Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation
%A Arnob Ghosh
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-ghosh23a
%I PMLR
%P 1112--1124
%U https://proceedings.mlr.press/v211/ghosh23a.html
%V 211
APA
Ghosh, A. (2023). Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:1112-1124. Available from https://proceedings.mlr.press/v211/ghosh23a.html.
