Sequential Cooperative Bayesian Inference

Junqi Wang, Pei Wang, Patrick Shafto
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10039-10049, 2020.

Abstract

Cooperation is often implicitly assumed when learning from other agents. Cooperation implies that the agent selecting the data and the agent learning from the data have the same goal: that the learner infer the intended hypothesis. Recent models in human and machine learning have demonstrated the possibility of cooperation. We seek foundational theoretical results for cooperative inference by Bayesian agents through sequential data. We develop novel approaches to analyzing the consistency, rate of convergence, and stability of Sequential Cooperative Bayesian Inference (SCBI). Our analysis of effectiveness, sample efficiency, and robustness shows that cooperation is not only possible but theoretically well-founded. We discuss implications for human-human and human-machine cooperation.
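The abstract states consistency, rate-of-convergence, and stability results for SCBI but does not spell out the sequential protocol itself. As a rough, non-authoritative sketch only: below is one way a round of sequential cooperative Bayesian teaching and learning could be set up, assuming a shared likelihood matrix over data and hypotheses, a Sinkhorn-style rescaling as the cooperative planning step, and the learner's posterior carried forward as the next round's shared prior. The function names and the specific normalization are illustrative assumptions, not definitions from the paper; see the paper's PDF for the actual SCBI construction.

# Illustrative toy sketch, NOT the paper's exact SCBI algorithm.
# Assumptions (mine, not the source's): finite data/hypothesis sets, a shared
# likelihood matrix M[d, h], and alternating row/column normalization
# ("Sinkhorn-style") as the cooperative planning operation.
import numpy as np

def sinkhorn_scale(M, col_targets, n_iters=50):
    """Alternately rescale so rows carry equal mass and columns sum to col_targets."""
    S = M.astype(float).copy()
    n_rows = S.shape[0]
    for _ in range(n_iters):
        S = S / S.sum(axis=1, keepdims=True) / n_rows          # each row sums to 1/n_rows
        S = S * (col_targets / S.sum(axis=0, keepdims=True))   # each column sums to its target
    return S

def cooperative_round(M, prior, true_h, rng):
    """One teaching/learning round under a shared prior over hypotheses."""
    S = sinkhorn_scale(M, prior)                  # both agents compute the same scaled matrix
    teach = S[:, true_h] / S[:, true_h].sum()     # teacher's distribution over data
    d = rng.choice(len(teach), p=teach)           # teacher samples one datum
    posterior = S[d, :] / S[d, :].sum()           # learner's updated belief over hypotheses
    return d, posterior

rng = np.random.default_rng(0)
M = rng.random((4, 3))            # 4 possible data points, 3 hypotheses
prior = np.full(3, 1.0 / 3.0)     # shared uniform prior
true_h = 2                        # hypothesis the teacher intends to convey
for t in range(20):               # sequential rounds: posterior becomes the next prior
    d, prior = cooperative_round(M, prior, true_h, rng)
print("shared belief after 20 rounds:", np.round(prior, 3))

Under the paper's consistency result one would expect the shared belief to concentrate on the intended hypothesis as the number of rounds grows; the toy loop above only illustrates the shape of such a sequential update, not the paper's guarantees.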

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wang20u,
  title     = {Sequential Cooperative {B}ayesian Inference},
  author    = {Wang, Junqi and Wang, Pei and Shafto, Patrick},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10039--10049},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wang20u/wang20u.pdf},
  url       = {https://proceedings.mlr.press/v119/wang20u.html},
  abstract  = {Cooperation is often implicitly assumed when learning from other agents. Cooperation implies that the agent selecting the data, and the agent learning from the data, have the same goal, that the learner infer the intended hypothesis. Recent models in human and machine learning have demonstrated the possibility of cooperation. We seek foundational theoretical results for cooperative inference by Bayesian agents through sequential data. We develop novel approaches analyzing consistency, rate of convergence and stability of Sequential Cooperative Bayesian Inference (SCBI). Our analysis of the effectiveness, sample efficiency and robustness show that cooperation is not only possible but theoretically well-founded. We discuss implications for human-human and human-machine cooperation.}
}
Endnote
%0 Conference Paper
%T Sequential Cooperative Bayesian Inference
%A Junqi Wang
%A Pei Wang
%A Patrick Shafto
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wang20u
%I PMLR
%P 10039--10049
%U https://proceedings.mlr.press/v119/wang20u.html
%V 119
%X Cooperation is often implicitly assumed when learning from other agents. Cooperation implies that the agent selecting the data, and the agent learning from the data, have the same goal, that the learner infer the intended hypothesis. Recent models in human and machine learning have demonstrated the possibility of cooperation. We seek foundational theoretical results for cooperative inference by Bayesian agents through sequential data. We develop novel approaches analyzing consistency, rate of convergence and stability of Sequential Cooperative Bayesian Inference (SCBI). Our analysis of the effectiveness, sample efficiency and robustness show that cooperation is not only possible but theoretically well-founded. We discuss implications for human-human and human-machine cooperation.
APA
Wang, J., Wang, P. & Shafto, P. (2020). Sequential Cooperative Bayesian Inference. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10039-10049. Available from https://proceedings.mlr.press/v119/wang20u.html.
