XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty

Fan Ding, Yexiang Xue
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:151-160, 2021.

Abstract

Many decision-making problems under uncertainty can be formulated as convex stochastic optimization, which minimizes a convex objective in expectation over exponentially many probabilistic scenarios. Despite its convexity, evaluating the objective function is #P-hard. Previous approaches approximate the objective with samples from MCMC and its variants, but these samplers suffer from slow mixing. We present XOR-SGD, a stochastic gradient descent (SGD) approach guaranteed to converge to solutions that are at most a constant away from the true optimum in a linear number of iterations. XOR-SGD harnesses XOR-sampling, which reduces the sample approximation of the expectation to queries to NP oracles via hashing and projection. We evaluate XOR-SGD on two real-world applications. The first, a stochastic inventory management problem, searches for an inventory plan that is robust to events such as virus pandemics and natural disasters. The second, a network design problem, decides an optimal land conservation plan that promotes the free movement of wildlife. We show that our approach finds better solutions with drastically fewer samples than state-of-the-art solvers.
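
To make the formulation concrete, below is a minimal sketch of the SGD loop the abstract describes, for a problem of the form min_x E_{theta ~ P}[f(x, theta)] with f convex in x. All names here (sample_scenarios, grad_f, xor_sgd) are illustrative rather than taken from the paper, and the sketch substitutes ordinary Monte Carlo draws for XOR-sampling, whose hashing-and-projection step requires an NP oracle (e.g., a SAT solver) and is not reproduced here.

import numpy as np

# Hypothetical stand-in for XOR-sampling: in the paper, scenarios are drawn
# via hashing and projection using an NP oracle; this sketch uses plain
# Monte Carlo draws of binary scenario vectors instead.
def sample_scenarios(rng, k, dim):
    return rng.integers(0, 2, size=(k, dim))

# Subgradient of f(x, theta) in x, for a toy convex f(x, theta) = ||x - theta||^2.
def grad_f(x, theta):
    return 2.0 * (x - theta)

def xor_sgd(dim, n_iters=200, k=16, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for t in range(n_iters):
        thetas = sample_scenarios(rng, k, dim)                 # sample approximation of E[.]
        g = np.mean([grad_f(x, th) for th in thetas], axis=0)  # averaged stochastic gradient
        x -= lr / np.sqrt(t + 1) * g                           # diminishing step size
    return x

print(xor_sgd(dim=5))  # converges toward the componentwise mean of the scenarios

With plain Monte Carlo draws the gradient estimate can have high variance under highly structured scenario distributions; the abstract's point is that replacing them with XOR samples gives accuracy guarantees strong enough to bound the final solution within a constant of the true optimum.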

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-ding21a,
  title     = {XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty},
  author    = {Ding, Fan and Xue, Yexiang},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {151--160},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/ding21a/ding21a.pdf},
  url       = {https://proceedings.mlr.press/v161/ding21a.html}
}
Endnote
%0 Conference Paper
%T XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty
%A Fan Ding
%A Yexiang Xue
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-ding21a
%I PMLR
%P 151--160
%U https://proceedings.mlr.press/v161/ding21a.html
%V 161
APA
Ding, F. & Xue, Y. (2021). XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:151-160. Available from https://proceedings.mlr.press/v161/ding21a.html.
