A Scalable Frank-Wolfe-Based Algorithm for the Max-Cut SDP

Chi Bach Pham, Wynita Griggs, James Saunderson
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27822-27839, 2023.

Abstract

We consider the problem of solving large-scale instances of the Max-Cut semidefinite program (SDP), i.e., optimizing a linear function over $n\times n$ positive semidefinite (PSD) matrices with unit diagonal. When the cost matrix is PSD, we show how to exactly reformulate the problem as maximizing a smooth concave function over PSD matrices with unit trace. By applying the Frank-Wolfe method, we obtain a simple algorithm that is compatible with recent sampling-based techniques to solve SDPs using low memory. We demonstrate the practical performance of our method on $10^6\times 10^6$ instances of the max-cut SDP with costs having up to $5 \times 10^6$ non-zero entries. Theoretically, we show that our method solves problems with diagonally dominant costs to relative error $\epsilon$ in $O(n\epsilon^{-1})$ calls to a randomized approximate largest eigenvalue subroutine, each of which succeeds with high probability after $O(\log(n)\epsilon^{-1/2})$ matrix-vector multiplications with the cost matrix.
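The core iteration described in the abstract can be illustrated with a generic sketch: Frank-Wolfe over the unit-trace spectrahedron $\{X \succeq 0,\ \mathrm{tr}(X)=1\}$, where the linear maximization oracle reduces to a largest-eigenvector computation. This is not the paper's exact reformulation or its randomized eigenvalue subroutine; plain power iteration stands in for the latter, and the function and parameter names are illustrative.

```python
import numpy as np

def top_eigvec(A, iters=200):
    # Power iteration: a simple stand-in for the paper's randomized
    # approximate largest-eigenvalue subroutine (e.g., Lanczos-type
    # methods would be used at scale).
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def frank_wolfe_spectrahedron(grad, n, steps=200):
    # Frank-Wolfe over {X PSD, tr(X) = 1}. The linear maximization
    # oracle over this set returns the rank-one matrix v v^T, where v
    # is the top eigenvector of the current gradient.
    X = np.eye(n) / n  # feasible starting point
    for k in range(steps):
        v = top_eigvec(grad(X))
        gamma = 2.0 / (k + 2)  # standard open-loop step size
        X = (1 - gamma) * X + gamma * np.outer(v, v)
    return X
```

For a linear objective $\mathrm{tr}(CX)$ the gradient is the constant matrix $C$, so `frank_wolfe_spectrahedron(lambda X: C, n)` drives the iterate toward the top eigenvector's outer product; each update touches only a rank-one direction, which is what makes the method compatible with the low-memory, sampling-based techniques mentioned above.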

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-pham23a,
  title     = {A Scalable Frank-{W}olfe-Based Algorithm for the Max-Cut {SDP}},
  author    = {Pham, Chi Bach and Griggs, Wynita and Saunderson, James},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {27822--27839},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/pham23a/pham23a.pdf},
  url       = {https://proceedings.mlr.press/v202/pham23a.html},
  abstract  = {We consider the problem of solving large-scale instances of the Max-Cut semidefinite program (SDP), i.e., optimizing a linear function over $n\times n$ positive semidefinite (PSD) matrices with unit diagonal. When the cost matrix is PSD, we show how to exactly reformulate the problem as maximizing a smooth concave function over PSD matrices with unit trace. By applying the Frank-Wolfe method, we obtain a simple algorithm that is compatible with recent sampling-based techniques to solve SDPs using low memory. We demonstrate the practical performance of our method on $10^6\times 10^6$ instances of the max-cut SDP with costs having up to $5 \times 10^6$ non-zero entries. Theoretically, we show that our method solves problems with diagonally dominant costs to relative error $\epsilon$ in $O(n\epsilon^{-1})$ calls to a randomized approximate largest eigenvalue subroutine, each of which succeeds with high probability after $O(\log(n)\epsilon^{-1/2})$ matrix-vector multiplications with the cost matrix.}
}
Endnote
%0 Conference Paper
%T A Scalable Frank-Wolfe-Based Algorithm for the Max-Cut SDP
%A Chi Bach Pham
%A Wynita Griggs
%A James Saunderson
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-pham23a
%I PMLR
%P 27822--27839
%U https://proceedings.mlr.press/v202/pham23a.html
%V 202
%X We consider the problem of solving large-scale instances of the Max-Cut semidefinite program (SDP), i.e., optimizing a linear function over $n\times n$ positive semidefinite (PSD) matrices with unit diagonal. When the cost matrix is PSD, we show how to exactly reformulate the problem as maximizing a smooth concave function over PSD matrices with unit trace. By applying the Frank-Wolfe method, we obtain a simple algorithm that is compatible with recent sampling-based techniques to solve SDPs using low memory. We demonstrate the practical performance of our method on $10^6\times 10^6$ instances of the max-cut SDP with costs having up to $5 \times 10^6$ non-zero entries. Theoretically, we show that our method solves problems with diagonally dominant costs to relative error $\epsilon$ in $O(n\epsilon^{-1})$ calls to a randomized approximate largest eigenvalue subroutine, each of which succeeds with high probability after $O(\log(n)\epsilon^{-1/2})$ matrix-vector multiplications with the cost matrix.
APA
Pham, C.B., Griggs, W. & Saunderson, J. (2023). A Scalable Frank-Wolfe-Based Algorithm for the Max-Cut SDP. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:27822-27839. Available from https://proceedings.mlr.press/v202/pham23a.html.