Simple Randomized Rounding for Max-Min Eigenvalue Augmentation
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32406-32419, 2025.
Abstract
We consider the max-min eigenvalue augmentation problem: given $n \times n$ symmetric positive semidefinite matrices $M, A_1, \ldots, A_m$ and a positive integer $k < m$, the goal is to choose a subset $I \subset \{1,\ldots,m\}$ of cardinality at most $k$ that maximizes the minimum eigenvalue of the matrix $M + \sum_{i \in I} A_i$. The problem captures both the Bayesian E-optimal design and maximum algebraic connectivity augmentation problems. In contrast to existing work, we do not assume that the augmentation matrices have rank one, and we focus on the setting in which $k < n$. We show that a simple randomized rounding method provides a constant-factor approximation if the optimal increase is sufficiently large; specifically, if $\mathrm{OPT} - \lambda_{\mathrm{min}}(M) = \Omega(R \ln k)$, where $\mathrm{OPT}$ is the optimal value and $R$ is the maximum trace of an augmentation matrix. To establish the guarantee, we derive a matrix concentration inequality that is of independent interest. The inequality can be interpreted as an intrinsic dimension analog of the matrix Chernoff inequality for the minimum eigenvalue of a sum of independent random positive semidefinite matrices; such an inequality was previously known for the maximum eigenvalue, but not for the minimum eigenvalue.
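The abstract does not spell out the rounding scheme, so the following is only an illustrative sketch of a plausible relax-and-round pipeline, not the paper's algorithm: solve the natural convex relaxation with fractional weights $x \in [0,1]^m$, $\sum_i x_i \le k$, maximizing $\lambda_{\min}(M + \sum_i x_i A_i)$ (a semidefinite program), then keep each index independently with probability $x_i$. The function name relax_and_round and every implementation choice below are hypothetical.

import numpy as np
import cvxpy as cp

def relax_and_round(M, As, k, seed=0):
    # Sketch only: the paper's relaxation and rounding distribution may differ.
    m = len(As)
    x = cp.Variable(m, nonneg=True)  # fractional subset indicators in [0, 1]
    S = M + sum(x[i] * As[i] for i in range(m))  # augmented matrix, affine in x
    prob = cp.Problem(cp.Maximize(cp.lambda_min(S)),
                      [x <= 1, cp.sum(x) <= k])
    prob.solve()  # SDP; requires a semidefinite-capable solver such as SCS
    # Independent rounding: the expected cardinality of the output is at most k,
    # though a single draw may exceed k and can be resampled if needed.
    rng = np.random.default_rng(seed)
    return [i for i in range(m) if rng.random() < float(x.value[i])]

In practice one would draw several subsets and keep the best; the abstract's guarantee applies to schemes of this general type once $\mathrm{OPT} - \lambda_{\mathrm{min}}(M) = \Omega(R \ln k)$.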
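As background for the final claim, the standard matrix Chernoff bound (Tropp, 2012) controls the lower tail of the minimum eigenvalue with a factor equal to the ambient dimension $n$: if $X_1, \ldots, X_m$ are independent random positive semidefinite matrices with $\lambda_{\max}(X_i) \le L$ almost surely and $\mu_{\min} := \lambda_{\min}\big(\mathbb{E}\sum_i X_i\big)$, then for $\delta \in [0,1)$,
$$\Pr\Big[\lambda_{\min}\Big(\sum_i X_i\Big) \le (1-\delta)\,\mu_{\min}\Big] \le n \cdot \Big(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\Big)^{\mu_{\min}/L}.$$
An intrinsic dimension analog replaces the factor $n$ by a dimension-like quantity, commonly defined for a positive semidefinite matrix $A$ as $\operatorname{intdim}(A) = \operatorname{tr}(A)/\lambda_{\max}(A)$, which can be much smaller than $n$; as the abstract states, such a refinement was previously available for the maximum eigenvalue only, and the paper's inequality supplies the minimum-eigenvalue counterpart.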