Gradient and Projection Free Distributed Online Min-Max Resource Optimization

Jingrong Wang, Ben Liang
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:391-403, 2022.

Abstract

We consider distributed online min-max resource allocation with a set of parallel agents and a parameter server. Our goal is to minimize the pointwise maximum over a set of time-varying and decreasing cost functions, without a priori information about these functions. We propose a novel online algorithm, termed Distributed Online resource Re-Allocation (DORA), in which non-stragglers learn to relinquish resources and share them with stragglers. A notable feature of DORA is that it requires neither gradient computation nor projection operations, unlike most existing online optimization strategies. This allows it to substantially reduce the computation overhead in large-scale and distributed networks. We show that the dynamic regret of the proposed algorithm is upper bounded by O(T^{3/4}(1+P_T)^{1/4}), where T is the total number of rounds and P_T is the path-length of the instantaneous minimizers. We further consider an application to the bandwidth allocation problem in distributed online machine learning. Our numerical study demonstrates the efficacy of the proposed solution and its performance advantage over gradient- and/or projection-based resource allocation algorithms in reducing wall-clock time.
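
For readers unfamiliar with the setting, the following is a brief sketch of the standard online min-max formulation and dynamic regret measure that the abstract refers to; the notation (cost functions f_{t,i}, feasible set X, number of agents N) is assumed here for illustration, and the precise definitions appear in the paper.

% Per-round objective: choose the allocation x_t before the round-t costs are revealed,
% so as to minimize the worst-case (pointwise maximum) cost across the N agents.
\min_{x_t \in \mathcal{X}} \; \max_{i \in \{1, \dots, N\}} f_{t,i}(x_t)

% Dynamic regret against the sequence of instantaneous minimizers x_t^*,
% with the path-length P_T measuring how much these minimizers drift over T rounds.
\mathrm{Reg}_T = \sum_{t=1}^{T} \left[ \max_{i} f_{t,i}(x_t) - \max_{i} f_{t,i}(x_t^{*}) \right],
\qquad
P_T = \sum_{t=2}^{T} \lVert x_t^{*} - x_{t-1}^{*} \rVert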

Cite this Paper


BibTeX
@InProceedings{pmlr-v168-wang22a,
  title     = {Gradient and Projection Free Distributed Online Min-Max Resource Optimization},
  author    = {Wang, Jingrong and Liang, Ben},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  pages     = {391--403},
  year      = {2022},
  editor    = {Firoozi, Roya and Mehr, Negar and Yel, Esen and Antonova, Rika and Bohg, Jeannette and Schwager, Mac and Kochenderfer, Mykel},
  volume    = {168},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v168/wang22a/wang22a.pdf},
  url       = {https://proceedings.mlr.press/v168/wang22a.html}
}
Endnote
%0 Conference Paper
%T Gradient and Projection Free Distributed Online Min-Max Resource Optimization
%A Jingrong Wang
%A Ben Liang
%B Proceedings of The 4th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Roya Firoozi
%E Negar Mehr
%E Esen Yel
%E Rika Antonova
%E Jeannette Bohg
%E Mac Schwager
%E Mykel Kochenderfer
%F pmlr-v168-wang22a
%I PMLR
%P 391--403
%U https://proceedings.mlr.press/v168/wang22a.html
%V 168
APA
Wang, J. & Liang, B. (2022). Gradient and Projection Free Distributed Online Min-Max Resource Optimization. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 168:391-403. Available from https://proceedings.mlr.press/v168/wang22a.html.