Optimally Improving Cooperative Learning in a Social Setting

Shahrzad Haddadan, Cheng Xin, Jie Gao
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:17148-17188, 2024.

Abstract

We consider a cooperative learning scenario in which a collection of networked agents with individually owned classifiers dynamically update their predictions, for the same classification task, through communication or observation of each other’s predictions. Clearly, if highly influential vertices use erroneous classifiers, there will be a negative effect on the accuracy of all the agents in the network. We ask the following question: how can we optimally fix the predictions of a few classifiers so as to maximize the overall accuracy of the entire network? To this end, we consider an aggregate and an egalitarian objective function. We give a polynomial-time algorithm for optimizing the aggregate objective function, and show that optimizing the egalitarian objective function is NP-hard. Furthermore, we develop approximation algorithms for the egalitarian improvement. The performance of all of our algorithms is guaranteed by mathematical analysis and backed by experiments on synthetic and real data.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-haddadan24a,
  title     = {Optimally Improving Cooperative Learning in a Social Setting},
  author    = {Haddadan, Shahrzad and Xin, Cheng and Gao, Jie},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {17148--17188},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/haddadan24a/haddadan24a.pdf},
  url       = {https://proceedings.mlr.press/v235/haddadan24a.html},
  abstract  = {We consider a cooperative learning scenario in which a collection of networked agents with individually owned classifiers dynamically update their predictions, for the same classification task, through communication or observation of each other’s predictions. Clearly, if highly influential vertices use erroneous classifiers, there will be a negative effect on the accuracy of all the agents in the network. We ask the following question: how can we optimally fix the predictions of a few classifiers so as to maximize the overall accuracy of the entire network? To this end, we consider an aggregate and an egalitarian objective function. We give a polynomial-time algorithm for optimizing the aggregate objective function, and show that optimizing the egalitarian objective function is NP-hard. Furthermore, we develop approximation algorithms for the egalitarian improvement. The performance of all of our algorithms is guaranteed by mathematical analysis and backed by experiments on synthetic and real data.}
}
EndNote
%0 Conference Paper
%T Optimally Improving Cooperative Learning in a Social Setting
%A Shahrzad Haddadan
%A Cheng Xin
%A Jie Gao
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-haddadan24a
%I PMLR
%P 17148--17188
%U https://proceedings.mlr.press/v235/haddadan24a.html
%V 235
%X We consider a cooperative learning scenario in which a collection of networked agents with individually owned classifiers dynamically update their predictions, for the same classification task, through communication or observation of each other’s predictions. Clearly, if highly influential vertices use erroneous classifiers, there will be a negative effect on the accuracy of all the agents in the network. We ask the following question: how can we optimally fix the predictions of a few classifiers so as to maximize the overall accuracy of the entire network? To this end, we consider an aggregate and an egalitarian objective function. We give a polynomial-time algorithm for optimizing the aggregate objective function, and show that optimizing the egalitarian objective function is NP-hard. Furthermore, we develop approximation algorithms for the egalitarian improvement. The performance of all of our algorithms is guaranteed by mathematical analysis and backed by experiments on synthetic and real data.
APA
Haddadan, S., Xin, C. & Gao, J. (2024). Optimally Improving Cooperative Learning in a Social Setting. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:17148-17188. Available from https://proceedings.mlr.press/v235/haddadan24a.html.