Multitask Online Learning: Listen to the Neighborhood Buzz

Juliette Achddou, Nicolò Cesa-Bianchi, Pierre Laforgue
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1846-1854, 2024.

Abstract

We study multitask online learning in a setting where agents can only exchange information with their neighbors on an arbitrary communication network. We introduce MT-CO₂OL, a decentralized algorithm for this setting whose regret depends on the interplay between the task similarities and the network structure. Our analysis shows that the regret of MT-CO₂OL is never worse (up to constants) than the bound obtained when agents do not share information. On the other hand, our bounds significantly improve when neighboring agents operate on similar tasks. In addition, we prove that our algorithm can be made differentially private with a negligible impact on the regret. Finally, we provide experimental support for our theory.
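To build intuition for the setting described above (agents running online learners on related tasks, communicating only with graph neighbors), here is a minimal sketch of a generic decentralized online gradient descent with neighbor averaging on a ring network. This is not the paper's MT-CO₂OL algorithm; all names and parameters (n_agents, adjacency, eta, targets) are illustrative assumptions.

import numpy as np

# Illustrative sketch only: decentralized online gradient descent with
# neighbor averaging on a fixed communication graph. NOT the paper's
# MT-CO2OL algorithm; it merely illustrates agents that can only exchange
# information with their graph neighbors.

rng = np.random.default_rng(0)
n_agents, dim, horizon, eta = 4, 3, 100, 0.1

# Ring communication network: agent i talks only to agents i-1 and i+1.
adjacency = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    adjacency[i, (i - 1) % n_agents] = 1
    adjacency[i, (i + 1) % n_agents] = 1

# Row-stochastic mixing matrix: uniform weights over self and neighbors.
mixing = adjacency + np.eye(n_agents)
mixing /= mixing.sum(axis=1, keepdims=True)

# Each agent faces its own task, here a quadratic loss around a task-specific
# target; similar tasks correspond to nearby targets.
targets = rng.normal(size=(n_agents, dim))
models = np.zeros((n_agents, dim))   # one model per agent

for t in range(horizon):
    # Local step: each agent observes its own loss and takes a gradient step.
    grads = 2 * (models - targets)    # gradient of ||w - target||^2
    models = models - eta * grads
    # Communication step: average only with graph neighbors.
    models = mixing @ models

print("final distance to own target, per agent:",
      np.linalg.norm(models - targets, axis=1))

When neighboring agents have similar targets, the averaging step pulls each model toward useful information from its neighborhood; when tasks differ widely, the local gradient steps dominate. This mirrors, at a very coarse level, the trade-off between task similarity and network structure that the paper's regret bounds quantify.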

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-achddou24a,
  title     = {Multitask Online Learning: Listen to the Neighborhood Buzz},
  author    = {Achddou, Juliette and Cesa-Bianchi, Nicol\`{o} and Laforgue, Pierre},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1846--1854},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/achddou24a/achddou24a.pdf},
  url       = {https://proceedings.mlr.press/v238/achddou24a.html},
  abstract  = {We study multitask online learning in a setting where agents can only exchange information with their neighbors on an arbitrary communication network. We introduce MT-CO\textsubscript{2}OL, a decentralized algorithm for this setting whose regret depends on the interplay between the task similarities and the network structure. Our analysis shows that the regret of MT-CO\textsubscript{2}OL is never worse (up to constants) than the bound obtained when agents do not share information. On the other hand, our bounds significantly improve when neighboring agents operate on similar tasks. In addition, we prove that our algorithm can be made differentially private with a negligible impact on the regret. Finally, we provide experimental support for our theory.}
}
Endnote
%0 Conference Paper
%T Multitask Online Learning: Listen to the Neighborhood Buzz
%A Juliette Achddou
%A Nicolò Cesa-Bianchi
%A Pierre Laforgue
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-achddou24a
%I PMLR
%P 1846--1854
%U https://proceedings.mlr.press/v238/achddou24a.html
%V 238
%X We study multitask online learning in a setting where agents can only exchange information with their neighbors on an arbitrary communication network. We introduce MT-CO\textsubscript{2}OL, a decentralized algorithm for this setting whose regret depends on the interplay between the task similarities and the network structure. Our analysis shows that the regret of MT-CO\textsubscript{2}OL is never worse (up to constants) than the bound obtained when agents do not share information. On the other hand, our bounds significantly improve when neighboring agents operate on similar tasks. In addition, we prove that our algorithm can be made differentially private with a negligible impact on the regret. Finally, we provide experimental support for our theory.
APA
Achddou, J., Cesa-Bianchi, N., & Laforgue, P. (2024). Multitask Online Learning: Listen to the Neighborhood Buzz. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1846-1854. Available from https://proceedings.mlr.press/v238/achddou24a.html.