Online Convex Optimization with Perturbed Constraints: Optimal Rates against Stronger Benchmarks

Victor Valls, George Iosifidis, Douglas Leith, Leandros Tassiulas
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2885-2895, 2020.

Abstract

This paper studies Online Convex Optimization (OCO) problems where the constraints have additive perturbations that (i) vary over time and (ii) are not known at the time a decision must be made. Perturbations need not be generated i.i.d. and can be used, for example, to model a time-varying budget or time-varying requests in resource allocation problems. Our goal is to design a policy that obtains sublinear regret and satisfies the constraints in the long term. To this end, we present an online primal-dual proximal gradient algorithm that has $O(T^\epsilon \vee T^{1-\epsilon})$ regret and $O(T^\epsilon)$ constraint violation, where $\epsilon \in [0,1)$ is a parameter in the learning rate. The proposed algorithm obtains optimal rates when $\epsilon = 1/2$, and can compare against a stronger comparator (the set of fixed decisions in hindsight) than previous work.
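To make the setting concrete, below is a minimal, hypothetical sketch of a generic online primal-dual gradient scheme of the kind the abstract describes: a decision is made first, then the perturbed constraint is revealed, a projected primal step and a nonnegative dual step follow, and the learning rate is scaled by a parameter playing the role of $\epsilon$. It is an illustration of the primal-dual template only, not the paper's exact proximal algorithm; the one-dimensional loss $(x_t - c_t)^2$ and budget constraint $x_t \le b_t$ are invented for the example.

```python
import numpy as np

def primal_dual_oco(costs, budgets, eps=0.5, x_lo=0.0, x_hi=1.0):
    """Illustrative online primal-dual gradient sketch (NOT the paper's
    exact proximal method). At each round t we pick x_t, then observe the
    cost center c_t and the perturbed budget b_t, and update via the
    Lagrangian L(x, lam) = (x - c_t)^2 + lam * (x - b_t)."""
    T = len(costs)
    eta = 1.0 / T**eps            # learning rate; eps mirrors the paper's parameter
    x, lam = x_lo, 0.0            # primal decision and nonnegative dual variable
    total_loss, violation = 0.0, 0.0
    decisions = []
    for t in range(T):
        decisions.append(x)       # commit to x_t before seeing c_t, b_t
        c, b = costs[t], budgets[t]
        total_loss += (x - c) ** 2
        g = x - b                 # constraint value g_t(x_t) = x_t - b_t
        violation += g            # accumulated (signed) constraint violation
        grad_x = 2.0 * (x - c) + lam          # gradient of the Lagrangian in x
        x = min(max(x - eta * grad_x, x_lo), x_hi)  # projected primal step
        lam = max(lam + eta * g, 0.0)               # dual ascent, kept >= 0
    return decisions, total_loss, max(violation, 0.0)
```

With step size $\eta = T^{-\epsilon}$, the dual variable grows only when the running constraint is violated, which is the mechanism behind trading off regret against long-term constraint violation.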

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-valls20a,
  title     = {Online Convex Optimization with Perturbed Constraints: Optimal Rates against Stronger Benchmarks},
  author    = {Valls, Victor and Iosifidis, George and Leith, Douglas and Tassiulas, Leandros},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2885--2895},
  year      = {2020},
  editor    = {Silvia Chiappa and Roberto Calandra},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  address   = {Online},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/valls20a/valls20a.pdf},
  url       = {http://proceedings.mlr.press/v108/valls20a.html},
  abstract  = {This paper studies Online Convex Optimization (OCO) problems where the constraints have additive perturbations that (i) vary over time and (ii) are not known at the time to make a decision. Perturbations may not be i.i.d. generated and can be used, for example, to model a time-varying budget or time-varying requests in resource allocation problems. Our goal is to design a policy that obtains sublinear regret and satisfies the constraints in the long-term. To this end, we present an online primal-dual proximal gradient algorithm that has $O(T^\epsilon \vee T^{1-\epsilon})$ regret and $O(T^\epsilon)$ constraint violation, where $\epsilon \in [0,1)$ is a parameter in the learning rate. The proposed algorithm obtains optimal rates when $\epsilon = 1/2$, and can compare against a stronger comparator (the set of fixed decisions in hindsight) than previous work.}
}
Endnote
%0 Conference Paper
%T Online Convex Optimization with Perturbed Constraints: Optimal Rates against Stronger Benchmarks
%A Victor Valls
%A George Iosifidis
%A Douglas Leith
%A Leandros Tassiulas
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-valls20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 2885--2895
%U http://proceedings.mlr.press
%V 108
%W PMLR
%X This paper studies Online Convex Optimization (OCO) problems where the constraints have additive perturbations that (i) vary over time and (ii) are not known at the time to make a decision. Perturbations may not be i.i.d. generated and can be used, for example, to model a time-varying budget or time-varying requests in resource allocation problems. Our goal is to design a policy that obtains sublinear regret and satisfies the constraints in the long-term. To this end, we present an online primal-dual proximal gradient algorithm that has $O(T^\epsilon \vee T^{1-\epsilon})$ regret and $O(T^\epsilon)$ constraint violation, where $\epsilon \in [0,1)$ is a parameter in the learning rate. The proposed algorithm obtains optimal rates when $\epsilon = 1/2$, and can compare against a stronger comparator (the set of fixed decisions in hindsight) than previous work.
APA
Valls, V., Iosifidis, G., Leith, D. & Tassiulas, L. (2020). Online Convex Optimization with Perturbed Constraints: Optimal Rates against Stronger Benchmarks. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in PMLR 108:2885-2895.
