On the Convergence of the Shapley Value in Parametric Bayesian Learning Games

Lucas Agussurja, Xinyi Xu, Bryan Kian Hsiang Low
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:180-196, 2022.

Abstract

Measuring contributions is a classical problem in cooperative game theory where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games where players perform a Bayesian inference using their combined data, and the posterior-prior KL divergence is used as the characteristic function. We show that for any two players, under some regularity conditions, their difference in Shapley value converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computations of posterior-prior KL divergences. Only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.
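The abstract's limiting game, whose characteristic function is proportional to the log-determinant of the joint Fisher information, can be illustrated with a small sketch. This is not the paper's implementation: it assumes (for illustration only) that a coalition's joint Fisher information is the sum of per-player Fisher information matrices, drops the proportionality constant, and computes exact Shapley values by subset enumeration, which is only feasible for small player counts.

```python
import numpy as np
from itertools import combinations
from math import comb

def char_fn(players, fisher):
    """Characteristic function of the limiting game (illustrative):
    v(S) = 1/2 * log det of the coalition's Fisher information,
    with v(empty set) = 0. Assumes the joint Fisher information of a
    coalition is the sum of per-player matrices -- an assumption made
    here for the sketch, not taken from the paper."""
    if not players:
        return 0.0
    total = sum(fisher[p] for p in players)
    # slogdet returns (sign, log|det|); positive-definite input => sign 1
    return 0.5 * np.linalg.slogdet(total)[1]

def shapley_values(n, v):
    """Exact Shapley value phi_i = sum over S subset of N\\{i} of
    |S|! (n-1-|S|)! / n! * (v(S + {i}) - v(S)), by enumerating subsets."""
    phi = np.zeros(n)
    all_players = set(range(n))
    for i in range(n):
        others = sorted(all_players - {i})
        for k in range(n):
            weight = 1.0 / (n * comb(n - 1, k))
            for S in combinations(others, k):
                S = set(S)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Hypothetical per-player Fisher information matrices (positive definite).
fisher = [np.eye(2) * (p + 1) for p in range(3)]
phi = shapley_values(3, lambda S: char_fn(S, fisher))
```

By the efficiency axiom, the computed values sum to the grand-coalition value `char_fn({0, 1, 2}, fisher)`; players with identical Fisher information receive identical values, matching the symmetry axiom.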

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-agussurja22a,
  title     = {On the Convergence of the Shapley Value in Parametric {B}ayesian Learning Games},
  author    = {Agussurja, Lucas and Xu, Xinyi and Low, Bryan Kian Hsiang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {180--196},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/agussurja22a/agussurja22a.pdf},
  url       = {https://proceedings.mlr.press/v162/agussurja22a.html},
  abstract  = {Measuring contributions is a classical problem in cooperative game theory where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games where players perform a Bayesian inference using their combined data, and the posterior-prior KL divergence is used as the characteristic function. We show that for any two players, under some regularity conditions, their difference in Shapley value converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computations of posterior-prior KL divergences. Only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.}
}
Endnote
%0 Conference Paper
%T On the Convergence of the Shapley Value in Parametric Bayesian Learning Games
%A Lucas Agussurja
%A Xinyi Xu
%A Bryan Kian Hsiang Low
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-agussurja22a
%I PMLR
%P 180--196
%U https://proceedings.mlr.press/v162/agussurja22a.html
%V 162
%X Measuring contributions is a classical problem in cooperative game theory where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games where players perform a Bayesian inference using their combined data, and the posterior-prior KL divergence is used as the characteristic function. We show that for any two players, under some regularity conditions, their difference in Shapley value converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computations of posterior-prior KL divergences. Only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.
APA
Agussurja, L., Xu, X. & Low, B.K.H. (2022). On the Convergence of the Shapley Value in Parametric Bayesian Learning Games. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:180-196. Available from https://proceedings.mlr.press/v162/agussurja22a.html.