Learning to Collaborate in Markov Decision Processes

Goran Radanovic, Rati Devidze, David Parkes, Adish Singla
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5261-5270, 2019.

Abstract

We consider a two-agent MDP framework where agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent (A1) that facilitates a successful collaboration even in cases when the second agent (A2) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent A1 whose regret decays as $O(T^{1-\frac{3}{7} \cdot \alpha})$ with $T$ learning episodes, provided that the magnitude of agent A2’s policy changes between any two consecutive episodes is upper bounded by $O(T^{-\alpha})$. Here, the parameter $\alpha$ is assumed to be strictly greater than $0$, and we show that this assumption is necessary provided that the learning parity with noise problem is computationally hard. We show that sub-linear regret of agent A1 further implies near-optimality of the agents’ joint return for MDPs that manifest the properties of a smooth game.
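
To make the stated rate concrete, here is a minimal arithmetic sketch (ours, not from the paper; the function name and the sample values of $\alpha$ are illustrative) that evaluates the regret exponent $1 - \frac{3}{7}\alpha$: any $\alpha > 0$ gives an exponent strictly below 1, i.e. sub-linear regret, while $\alpha = 0$ recovers the trivial linear rate.

# Illustrative arithmetic only: evaluates the exponent of T in the stated
# regret bound O(T^{1 - (3/7) * alpha}), where alpha is the rate at which
# agent A2's per-episode policy change shrinks, i.e. O(T^{-alpha}).
def regret_exponent(alpha: float) -> float:
    """Exponent of T in the regret bound O(T^{1 - (3/7) * alpha})."""
    return 1.0 - (3.0 / 7.0) * alpha

# Sample (hypothetical) values of alpha: alpha = 0 yields linear regret,
# while any alpha > 0 yields a sub-linear rate.
for alpha in (0.0, 0.5, 1.0, 7.0 / 3.0):
    e = regret_exponent(alpha)
    print(f"alpha = {alpha:.3f} -> regret = O(T^{e:.3f}), sub-linear: {e < 1.0}")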

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-radanovic19a,
  title     = {Learning to Collaborate in {M}arkov Decision Processes},
  author    = {Radanovic, Goran and Devidze, Rati and Parkes, David and Singla, Adish},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5261--5270},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/radanovic19a/radanovic19a.pdf},
  url       = {https://proceedings.mlr.press/v97/radanovic19a.html},
  abstract  = {We consider a two-agent MDP framework where agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent (A1) that facilitates a successful collaboration even in cases when the second agent (A2) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent A1 whose regret decays as $O(T^{1-\frac{3}{7} \cdot \alpha})$ with $T$ learning episodes, provided that the magnitude of agent A2’s policy changes between any two consecutive episodes is upper bounded by $O(T^{-\alpha})$. Here, the parameter $\alpha$ is assumed to be strictly greater than $0$, and we show that this assumption is necessary provided that the learning parity with noise problem is computationally hard. We show that sub-linear regret of agent A1 further implies near-optimality of the agents’ joint return for MDPs that manifest the properties of a smooth game.}
}
Endnote
%0 Conference Paper
%T Learning to Collaborate in Markov Decision Processes
%A Goran Radanovic
%A Rati Devidze
%A David Parkes
%A Adish Singla
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-radanovic19a
%I PMLR
%P 5261--5270
%U https://proceedings.mlr.press/v97/radanovic19a.html
%V 97
%X We consider a two-agent MDP framework where agents repeatedly solve a task in a collaborative setting. We study the problem of designing a learning algorithm for the first agent (A1) that facilitates a successful collaboration even in cases when the second agent (A2) is adapting its policy in an unknown way. The key challenge in our setting is that the first agent faces non-stationarity in rewards and transitions because of the adaptive behavior of the second agent. We design novel online learning algorithms for agent A1 whose regret decays as $O(T^{1-\frac{3}{7} \cdot \alpha})$ with $T$ learning episodes, provided that the magnitude of agent A2’s policy changes between any two consecutive episodes is upper bounded by $O(T^{-\alpha})$. Here, the parameter $\alpha$ is assumed to be strictly greater than $0$, and we show that this assumption is necessary provided that the learning parity with noise problem is computationally hard. We show that sub-linear regret of agent A1 further implies near-optimality of the agents’ joint return for MDPs that manifest the properties of a smooth game.
APA
Radanovic, G., Devidze, R., Parkes, D. & Singla, A. (2019). Learning to Collaborate in Markov Decision Processes. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5261-5270. Available from https://proceedings.mlr.press/v97/radanovic19a.html.
