No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes

Minbiao Han, Fengxue Zhang, Yuxin Chen
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:1541-1557, 2024.

Abstract

This paper investigates the challenge of learning in black-box games, where the underlying utility function is unknown to any of the agents. While there is an extensive body of literature on the theoretical analysis of algorithms for computing the Nash equilibrium with *complete information* about the game, studies on Nash equilibrium in *black-box* games are less common. In this paper, we focus on learning the Nash equilibrium when the only available information about an agent’s payoff comes in the form of empirical queries. We provide a no-regret learning algorithm that utilizes Gaussian processes to identify equilibria in such games. Our approach not only ensures a theoretical convergence rate but also demonstrates effectiveness across a varied collection of games through experimental validation.
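The abstract describes the approach only at a high level: model each player's unknown payoff with a Gaussian-process surrogate, query joint action profiles, and drive an estimate of the Nash (deviation) regret toward zero. The snippet below is a minimal illustrative sketch of that idea, not the paper's algorithm: it assumes a hypothetical two-player game on a finite action grid, uses scikit-learn's `GaussianProcessRegressor` as the surrogate, and picks queries with a simple optimism-adjusted greedy rule of my own choosing.

```python
# Minimal sketch (not the paper's algorithm): query-based Nash-equilibrium search
# in a hypothetical two-player black-box game using GP surrogates of each payoff.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical smooth payoffs; in the black-box setting these are only queryable.
def u1(a):  # player 1's payoff at joint action a = (a1, a2)
    return -((a[0] - 0.3) ** 2) + 0.2 * a[0] * a[1]

def u2(a):  # player 2's payoff
    return -((a[1] - 0.7) ** 2) - 0.1 * a[0] * a[1]

grid = np.linspace(0.0, 1.0, 21)                      # finite action set per player
joint = np.array([[x, y] for x in grid for y in grid])  # all joint profiles

X, Y1, Y2 = [], [], []                                # query history
for a in joint[rng.choice(len(joint), 5, replace=False)]:
    X.append(a); Y1.append(u1(a)); Y2.append(u2(a))   # random warm-start queries

beta = 0.5                                            # exploration weight (assumed)
for t in range(40):
    gp1 = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X, Y1)
    gp2 = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X, Y2)
    m1, s1 = gp1.predict(joint, return_std=True)
    m2, s2 = gp2.predict(joint, return_std=True)

    # Estimated Nash regret of each profile: how much either player could gain
    # (on the posterior mean) by deviating unilaterally.
    M1 = m1.reshape(len(grid), len(grid)); M2 = m2.reshape(len(grid), len(grid))
    dev1 = M1.max(axis=0, keepdims=True) - M1         # player 1 varies its own action
    dev2 = M2.max(axis=1, keepdims=True) - M2         # player 2 varies its own action
    regret = np.maximum(dev1, dev2).ravel()

    # Optimistic greedy rule: prefer low estimated regret, discounted by uncertainty.
    score = regret - beta * (s1 + s2)
    a_next = joint[np.argmin(score)]
    X.append(a_next); Y1.append(u1(a_next)); Y2.append(u2(a_next))

best = joint[np.argmin(regret)]
print("approximate Nash profile:", best, "estimated regret:", regret.min())
```

In this toy setup the reported profile is the one whose estimated unilateral-deviation gain is smallest under the GP posterior means; the paper's actual acquisition rule and regret guarantees are given in the full text.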

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-han24b,
  title     = {No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes},
  author    = {Han, Minbiao and Zhang, Fengxue and Chen, Yuxin},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1541--1557},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/han24b/han24b.pdf},
  url       = {https://proceedings.mlr.press/v244/han24b.html}
}
Endnote
%0 Conference Paper
%T No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes
%A Minbiao Han
%A Fengxue Zhang
%A Yuxin Chen
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-han24b
%I PMLR
%P 1541--1557
%U https://proceedings.mlr.press/v244/han24b.html
%V 244
APA
Han, M., Zhang, F. & Chen, Y. (2024). No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:1541-1557. Available from https://proceedings.mlr.press/v244/han24b.html.
