Simple Randomized Rounding for Max-Min Eigenvalue Augmentation

Jourdain Lamperski, Haeseong Yang, Oleg Prokopyev
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32406-32419, 2025.

Abstract

We consider the max-min eigenvalue augmentation problem: given $n \times n$ symmetric positive semidefinite matrices $M,A_1,\ldots, A_m$ and a positive integer $k < m$, the goal is to choose a subset $I \subset \{1,\ldots,m\}$ of cardinality at most $k$ that maximizes the minimum eigenvalue of the matrix $M + \sum_{i \in I} A_i$. The problem captures both the Bayesian E-optimal design and maximum algebraic connectivity augmentation problems. In contrast to the existing work, we do not assume that the augmentation matrices are rank-one matrices, and we focus on the setting in which $k < n$. We show that a simple randomized rounding method provides a constant-factor approximation if the optimal increase is sufficiently large, specifically, if $\mathrm{OPT} - \lambda_{\mathrm{min}}(M) = \Omega(R \ln k)$, where $\mathrm{OPT}$ is the optimal value, and $R$ is the maximum trace of an augmentation matrix. To establish the guarantee, we derive a matrix concentration inequality that is of independent interest. The inequality can be interpreted as an intrinsic dimension analog of the matrix Chernoff inequality for the minimum eigenvalue of a sum of independent random positive semidefinite matrices; such an inequality has already been established for the maximum eigenvalue, but not for the minimum eigenvalue.
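To make the setup concrete, the short Python sketch below evaluates the objective $\lambda_{\mathrm{min}}(M + \sum_{i \in I} A_i)$ for a candidate index set and performs one generic randomized-rounding step. The sampling rule (include each index independently with probability derived from a fractional solution $x$) is an illustrative assumption about what "simple randomized rounding" could look like, not the paper's exact procedure, and the helper names are hypothetical.

import numpy as np

def objective(M, A_list, I):
    # lambda_min(M + sum_{i in I} A_i); eigvalsh returns eigenvalues in ascending order
    S = M + sum(A_list[i] for i in I)
    return np.linalg.eigvalsh(S)[0]

def round_solution(x, k, rng):
    # Hypothetical rounding: include index i independently with probability
    # proportional to x_i, scaled so the expected cardinality is at most k.
    # A single realization may exceed k; a full method would trim or resample.
    x = np.asarray(x, dtype=float)
    p = np.clip(k * x / x.sum(), 0.0, 1.0)
    return [i for i in range(len(x)) if rng.random() < p[i]]

rng = np.random.default_rng(0)
n, m, k = 4, 10, 3
M = np.eye(n)
# Rank-2 PSD augmentation matrices (the paper does not assume rank one)
A_list = [B @ B.T for B in (rng.standard_normal((n, 2)) for _ in range(m))]
x = rng.random(m)  # stand-in for a fractional relaxation solution
I = round_solution(x, k, rng)
print(sorted(I), objective(M, A_list, I))

For context on the concentration result, the classical matrix Chernoff lower-tail bound (Tropp, 2012) states that for independent random positive semidefinite $n \times n$ matrices $X_1, \ldots, X_N$ with $\lambda_{\mathrm{max}}(X_i) \le L$ almost surely and $\mu_{\mathrm{min}} = \lambda_{\mathrm{min}}(\mathbb{E} \sum_{i} X_i)$,
$$\Pr\left\{ \lambda_{\mathrm{min}}\left(\sum_{i=1}^{N} X_i\right) \le (1-\varepsilon)\,\mu_{\mathrm{min}} \right\} \le n \left( \frac{e^{-\varepsilon}}{(1-\varepsilon)^{1-\varepsilon}} \right)^{\mu_{\mathrm{min}}/L}, \qquad \varepsilon \in [0,1).$$
Per the abstract, the paper's contribution is an analog in which the ambient-dimension factor $n$ is replaced by an intrinsic-dimension quantity.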

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lamperski25a,
  title     = {Simple Randomized Rounding for Max-Min Eigenvalue Augmentation},
  author    = {Lamperski, Jourdain and Yang, Haeseong and Prokopyev, Oleg},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {32406--32419},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lamperski25a/lamperski25a.pdf},
  url       = {https://proceedings.mlr.press/v267/lamperski25a.html},
  abstract  = {We consider the max-min eigenvalue augmentation problem: given $n \times n$ symmetric positive semidefinite matrices $M,A_1,\ldots, A_m$ and a positive integer $k < m$, the goal is to choose a subset $I \subset \{1,\ldots,m\}$ of cardinality at most $k$ that maximizes the minimum eigenvalue of the matrix $M + \sum_{i \in I} A_i$. The problem captures both the Bayesian E-optimal design and maximum algebraic connectivity augmentation problems. In contrast to the existing work, we do not assume that the augmentation matrices are rank-one matrices, and we focus on the setting in which $k < n$. We show that a simple randomized rounding method provides a constant-factor approximation if the optimal increase is sufficiently large, specifically, if $\mathrm{OPT} - \lambda_{\mathrm{min}}(M) = \Omega(R \ln k)$, where $\mathrm{OPT}$ is the optimal value, and $R$ is the maximum trace of an augmentation matrix. To establish the guarantee, we derive a matrix concentration inequality that is of independent interest. The inequality can be interpreted as an intrinsic dimension analog of the matrix Chernoff inequality for the minimum eigenvalue of a sum of independent random positive semidefinite matrices; such an inequality has already been established for the maximum eigenvalue, but not for the minimum eigenvalue.}
}
Endnote
%0 Conference Paper
%T Simple Randomized Rounding for Max-Min Eigenvalue Augmentation
%A Jourdain Lamperski
%A Haeseong Yang
%A Oleg Prokopyev
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lamperski25a
%I PMLR
%P 32406--32419
%U https://proceedings.mlr.press/v267/lamperski25a.html
%V 267
%X We consider the max-min eigenvalue augmentation problem: given $n \times n$ symmetric positive semidefinite matrices $M,A_1,\ldots, A_m$ and a positive integer $k < m$, the goal is to choose a subset $I \subset \{1,\ldots,m\}$ of cardinality at most $k$ that maximizes the minimum eigenvalue of the matrix $M + \sum_{i \in I} A_i$. The problem captures both the Bayesian E-optimal design and maximum algebraic connectivity augmentation problems. In contrast to the existing work, we do not assume that the augmentation matrices are rank-one matrices, and we focus on the setting in which $k < n$. We show that a simple randomized rounding method provides a constant-factor approximation if the optimal increase is sufficiently large, specifically, if $\mathrm{OPT} - \lambda_{\mathrm{min}}(M) = \Omega(R \ln k)$, where $\mathrm{OPT}$ is the optimal value, and $R$ is the maximum trace of an augmentation matrix. To establish the guarantee, we derive a matrix concentration inequality that is of independent interest. The inequality can be interpreted as an intrinsic dimension analog of the matrix Chernoff inequality for the minimum eigenvalue of a sum of independent random positive semidefinite matrices; such an inequality has already been established for the maximum eigenvalue, but not for the minimum eigenvalue.
APA
Lamperski, J., Yang, H. & Prokopyev, O. (2025). Simple Randomized Rounding for Max-Min Eigenvalue Augmentation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:32406-32419. Available from https://proceedings.mlr.press/v267/lamperski25a.html.
