Learning and solving many-player games through a cluster-based representation

Sevan G. Ficici, David C. Parkes, Avi Pfeffer
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:188-195, 2008.

Abstract

In addressing the challenge of exponential scaling with the number of agents, we adopt a cluster-based representation to approximately solve asymmetric games with very many players. A cluster groups together agents with a similar "strategic view" of the game. We learn the clustered approximation from data consisting of strategy profiles and payoffs, which may be obtained from observations of play or from access to a simulator. Using our clustering, we construct a reduced "twins" game in which each cluster is associated with two players of the reduced game. This makes our representation individually responsive, because we align the interests of every individual agent with the strategy of its cluster. Our approach provides agents with higher payoffs and lower regret on average than both model-free methods and previous cluster-based methods, and requires only a few observations for learning to be successful. The "twins" construction is shown to be an important component of providing these low-regret approximations.
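To make the two-stage idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation): it clusters agents by the similarity of their observed payoff responses and then estimates entries of a reduced "twins" payoff function in which each cluster contributes an individual member and an aggregate "twin". The synthetic data, the k-means choice, and names such as twins_payoff are assumptions made only for illustration.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_agents, n_strategies, n_samples, n_clusters = 30, 3, 200, 3

# Synthetic stand-in for observed play: each sample records a pure-strategy
# profile (one strategy index per agent) and the payoff each agent received.
profiles = rng.integers(0, n_strategies, size=(n_samples, n_agents))
payoffs = rng.normal(size=(n_samples, n_agents))

# Step 1: summarize each agent's "strategic view" as its mean observed payoff
# for each strategy it played, then group agents with similar views.
views = np.zeros((n_agents, n_strategies))
for i in range(n_agents):
    for s in range(n_strategies):
        mask = profiles[:, i] == s
        views[i, s] = payoffs[mask, i].mean() if mask.any() else 0.0
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(views)

# Step 2: estimate payoffs of a reduced "twins" game. The payoff to one member
# of cluster k for playing s_self, while the rest of the cluster (its "twin")
# mostly plays s_twin, is approximated by averaging the observed payoffs of
# cluster-k agents who played s_self in samples whose cluster-k modal strategy
# was s_twin.
def twins_payoff(k, s_self, s_twin):
    members = np.flatnonzero(labels == k)
    total, count = 0.0, 0
    for t in range(n_samples):
        modal = np.bincount(profiles[t, members], minlength=n_strategies).argmax()
        if modal != s_twin:
            continue
        played = members[profiles[t, members] == s_self]
        total += payoffs[t, played].sum()
        count += played.size
    return total / count if count else 0.0

print(twins_payoff(0, 1, 2))

Separating the individual member from its cluster "twin" is what lets one check whether a single agent has an incentive to deviate from the strategy assigned to its cluster, which is the sense in which the reduced game is individually responsive.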

Cite this Paper


BibTeX
@InProceedings{pmlr-vR6-ficici08a,
  title     = {Learning and solving many-player games through a cluster-based representation},
  author    = {Ficici, Sevan G. and Parkes, David C. and Pfeffer, Avi},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {188--195},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/ficici08a/ficici08a.pdf},
  url       = {https://proceedings.mlr.press/r6/ficici08a.html},
  abstract  = {In addressing the challenge of exponential scaling with the number of agents we adopt a cluster-based representation to approximately solve asymmetric games of very many players. A cluster groups together agents with a similar "strategic view" of the game. We learn the clustered approximation from data consisting of strategy profiles and payoffs, which may be obtained from observations of play or access to a simulator. Using our clustering we construct a reduced "twins" game in which each cluster is associated with two players of the reduced game. This allows our representation to be individually-responsive because we align the interests of every individual agent with the strategy of its cluster. Our approach provides agents with higher payoffs and lower regret on average than model-free methods as well as previous cluster-based methods, and requires only few observations for learning to be successful. The "twins" approach is shown to be an important component of providing these low regret approximations.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Learning and solving many-player games through a cluster-based representation
%A Sevan G. Ficici
%A David C. Parkes
%A Avi Pfeffer
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-ficici08a
%I PMLR
%P 188--195
%U https://proceedings.mlr.press/r6/ficici08a.html
%V R6
%X In addressing the challenge of exponential scaling with the number of agents we adopt a cluster-based representation to approximately solve asymmetric games of very many players. A cluster groups together agents with a similar "strategic view" of the game. We learn the clustered approximation from data consisting of strategy profiles and payoffs, which may be obtained from observations of play or access to a simulator. Using our clustering we construct a reduced "twins" game in which each cluster is associated with two players of the reduced game. This allows our representation to be individually-responsive because we align the interests of every individual agent with the strategy of its cluster. Our approach provides agents with higher payoffs and lower regret on average than model-free methods as well as previous cluster-based methods, and requires only few observations for learning to be successful. The "twins" approach is shown to be an important component of providing these low regret approximations.
%Z Reissued by PMLR on 09 October 2024.
APA
Ficici, S.G., Parkes, D.C. & Pfeffer, A. (2008). Learning and solving many-player games through a cluster-based representation. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:188-195. Available from https://proceedings.mlr.press/r6/ficici08a.html. Reissued by PMLR on 09 October 2024.
