A deep learning approach for distributed aggregative optimization with users’ feedback

Riccardo Brumali, Guido Carnevale, Giuseppe Notarstefano
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1552-1564, 2024.

Abstract

We propose a novel distributed data-driven scheme for online aggregative optimization, i.e., the framework in which agents in a network aim to cooperatively minimize the sum of local time-varying costs, each depending on a local decision variable and an aggregation of all of them. We consider a “personalized” setup in which each cost exhibits a term capturing the user’s dissatisfaction and is, thus, unknown. We enhance an existing distributed optimization scheme by endowing it with a learning mechanism based on neural networks that estimate the missing part of the gradient via users’ feedback about the cost. Our algorithm combines two loops with different timescales devoted to performing optimization and learning steps. In turn, the proposed scheme also embeds a distributed consensus mechanism aimed at locally reconstructing the global information that is unavailable due to the presence of the aggregative variable. We prove an upper bound for the dynamic regret related to (i) the initial conditions, (ii) the temporal variations of the functions, and (iii) the learning errors about the unknown cost. Finally, we test our method via numerical simulations.

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-brumali24a,
  title     = {A deep learning approach for distributed aggregative optimization with users’ feedback},
  author    = {Brumali, Riccardo and Carnevale, Guido and Notarstefano, Giuseppe},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {1552--1564},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/brumali24a/brumali24a.pdf},
  url       = {https://proceedings.mlr.press/v242/brumali24a.html},
  abstract  = {We propose a novel distributed data-driven scheme for online aggregative optimization, i.e., the framework in which agents in a network aim to cooperatively minimize the sum of local time-varying costs, each depending on a local decision variable and an aggregation of all of them. We consider a “personalized” setup in which each cost exhibits a term capturing the user’s dissatisfaction and is, thus, unknown. We enhance an existing distributed optimization scheme by endowing it with a learning mechanism based on neural networks that estimate the missing part of the gradient via users’ feedback about the cost. Our algorithm combines two loops with different timescales devoted to performing optimization and learning steps. In turn, the proposed scheme also embeds a distributed consensus mechanism aimed at locally reconstructing the global information that is unavailable due to the presence of the aggregative variable. We prove an upper bound for the dynamic regret related to (i) the initial conditions, (ii) the temporal variations of the functions, and (iii) the learning errors about the unknown cost. Finally, we test our method via numerical simulations.}
}
Endnote
%0 Conference Paper
%T A deep learning approach for distributed aggregative optimization with users’ feedback
%A Riccardo Brumali
%A Guido Carnevale
%A Giuseppe Notarstefano
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-brumali24a
%I PMLR
%P 1552--1564
%U https://proceedings.mlr.press/v242/brumali24a.html
%V 242
%X We propose a novel distributed data-driven scheme for online aggregative optimization, i.e., the framework in which agents in a network aim to cooperatively minimize the sum of local time-varying costs, each depending on a local decision variable and an aggregation of all of them. We consider a “personalized” setup in which each cost exhibits a term capturing the user’s dissatisfaction and is, thus, unknown. We enhance an existing distributed optimization scheme by endowing it with a learning mechanism based on neural networks that estimate the missing part of the gradient via users’ feedback about the cost. Our algorithm combines two loops with different timescales devoted to performing optimization and learning steps. In turn, the proposed scheme also embeds a distributed consensus mechanism aimed at locally reconstructing the global information that is unavailable due to the presence of the aggregative variable. We prove an upper bound for the dynamic regret related to (i) the initial conditions, (ii) the temporal variations of the functions, and (iii) the learning errors about the unknown cost. Finally, we test our method via numerical simulations.
APA
Brumali, R., Carnevale, G. & Notarstefano, G. (2024). A deep learning approach for distributed aggregative optimization with users’ feedback. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:1552-1564. Available from https://proceedings.mlr.press/v242/brumali24a.html.