Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding

Xiao-Tong Yuan, Ping Li
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:3787-3813, 2020.

Abstract

Given a vector $w \in \mathbb{R}^p$ and a positive semi-definite matrix $A \in \mathbb{R}^{p\times p}$, we study the expansion ratio bound for the Mahalanobis hard thresholding operator of $w$ defined by \[ \mathcal{H}_{A,k}(w):=\argmin_{\|\theta\|_0\le k} \frac{1}{2}\|\theta - w\|^2_A, \] where $k\le p$ is the desired sparsity level. The core contribution of this paper is to prove that for any $\bar k$-sparse vector $\bar w$ with $\bar k < k$, the estimation error $\|\mathcal{H}_{A,k}(w) - \bar w\|_A$ satisfies \[ \|\mathcal{H}_{A,k}(w) - \bar w\|^2_A \le \left(1+ \mathcal{O}\left(\kappa(A,2k) \sqrt{\frac{\bar k}{k - \bar k}}\right)\right) \|w - \bar w\|^2_A, \] where $\kappa(A,2k)$ is the restricted strong condition number of $A$ over $(2k)$-sparse subspaces. This estimation error bound is nearly non-expansive when $k$ is sufficiently larger than $\bar k$. In particular, when $A$ is the identity matrix, so that $\kappa(A,2k)\equiv 1$, our bound recovers the previously known nearly non-expansive bounds for the Euclidean hard thresholding operator. We further show that this bound extends to an approximate version of $\mathcal{H}_{A,k}(w)$ estimated by the Hard Thresholding Pursuit (HTP) algorithm. We demonstrate the applicability of these bounds to the mean squared error analysis of HTP and of a novel extension of it based on preconditioning. Numerical evidence is provided to support our theory and to demonstrate the superiority of the proposed preconditioned HTP algorithm.
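
To make the definition concrete, the sketch below (a minimal illustration, not the authors' implementation) computes $\mathcal{H}_{A,k}(w)$ by brute-force enumeration of size-$k$ supports and checks that, for $A = I$, it reduces to ordinary Euclidean hard thresholding, i.e., keeping the $k$ largest-magnitude entries of $w$. The function names `mahalanobis_hard_threshold` and `hard_threshold` are illustrative choices; the support enumeration is exponential in $p$ and meant only for small examples.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the
# Mahalanobis hard thresholding operator H_{A,k}(w). For a fixed support S,
# minimizing (theta - w)^T A (theta - w) over theta supported on S gives the
# stationarity condition A[S,S] theta_S = (A w)_S.
import itertools
import numpy as np

def hard_threshold(w, k):
    """Euclidean case A = I: keep the k largest-magnitude entries of w."""
    theta = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    theta[keep] = w[keep]
    return theta

def mahalanobis_hard_threshold(w, A, k):
    """Brute-force H_{A,k}(w); exponential in p, for small demos only.
    Assumes each principal submatrix A[S,S] is nonsingular."""
    p = w.shape[0]
    best_theta, best_val = None, np.inf
    for S in itertools.combinations(range(p), k):
        S = list(S)
        theta = np.zeros(p)
        theta[S] = np.linalg.solve(A[np.ix_(S, S)], (A @ w)[S])
        residual = theta - w
        val = residual @ A @ residual  # squared Mahalanobis distance
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta

rng = np.random.default_rng(0)
p, k = 8, 3
w = rng.standard_normal(p)
# With A = I the Mahalanobis operator coincides with Euclidean thresholding.
assert np.allclose(mahalanobis_hard_threshold(w, np.eye(p), k),
                   hard_threshold(w, k))
```

Solving the restricted linear system on each candidate support, rather than simply truncating $w$, is what distinguishes the Mahalanobis operator from its Euclidean counterpart when $A \ne I$.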

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-yuan20a,
  title = {Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding},
  author = {Yuan, Xiao-Tong and Li, Ping},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages = {3787--3813},
  year = {2020},
  editor = {Abernethy, Jacob and Agarwal, Shivani},
  volume = {125},
  series = {Proceedings of Machine Learning Research},
  month = {09--12 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v125/yuan20a/yuan20a.pdf},
  url = {https://proceedings.mlr.press/v125/yuan20a.html},
  abstract = {Given a vector $w \in \mathbb{R}^p$ and a positive semi-definite matrix $A \in \mathbb{R}^{p\times p}$, we study the expansion ratio bound for the Mahalanobis hard thresholding operator of $w$ defined by \[ \mathcal{H}_{A,k}(w):=\argmin_{\|\theta\|_0\le k} \frac{1}{2}\|\theta - w\|^2_A, \] where $k\le p$ is the desired sparsity level. The core contribution of this paper is to prove that for any $\bar k$-sparse vector $\bar w$ with $\bar k < k$, the estimation error $\|\mathcal{H}_{A,k}(w) - \bar w\|_A$ satisfies \[ \|\mathcal{H}_{A,k}(w) - \bar w\|^2_A \le \left(1+ \mathcal{O}\left(\kappa(A,2k) \sqrt{\frac{\bar k}{k - \bar k}}\right)\right) \|w - \bar w\|^2_A, \] where $\kappa(A,2k)$ is the restricted strong condition number of $A$ over $(2k)$-sparse subspaces. This estimation error bound is nearly non-expansive when $k$ is sufficiently larger than $\bar k$. In particular, when $A$ is the identity matrix, so that $\kappa(A,2k)\equiv 1$, our bound recovers the previously known nearly non-expansive bounds for the Euclidean hard thresholding operator. We further show that this bound extends to an approximate version of $\mathcal{H}_{A,k}(w)$ estimated by the Hard Thresholding Pursuit (HTP) algorithm. We demonstrate the applicability of these bounds to the mean squared error analysis of HTP and of a novel extension of it based on preconditioning. Numerical evidence is provided to support our theory and to demonstrate the superiority of the proposed preconditioned HTP algorithm.}
}
Endnote
%0 Conference Paper
%T Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding
%A Xiao-Tong Yuan
%A Ping Li
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-yuan20a
%I PMLR
%P 3787--3813
%U https://proceedings.mlr.press/v125/yuan20a.html
%V 125
%X Given a vector $w \in \mathbb{R}^p$ and a positive semi-definite matrix $A \in \mathbb{R}^{p\times p}$, we study the expansion ratio bound for the Mahalanobis hard thresholding operator of $w$ defined by \[ \mathcal{H}_{A,k}(w):=\argmin_{\|\theta\|_0\le k} \frac{1}{2}\|\theta - w\|^2_A, \] where $k\le p$ is the desired sparsity level. The core contribution of this paper is to prove that for any $\bar k$-sparse vector $\bar w$ with $\bar k < k$, the estimation error $\|\mathcal{H}_{A,k}(w) - \bar w\|_A$ satisfies \[ \|\mathcal{H}_{A,k}(w) - \bar w\|^2_A \le \left(1+ \mathcal{O}\left(\kappa(A,2k) \sqrt{\frac{\bar k}{k - \bar k}}\right)\right) \|w - \bar w\|^2_A, \] where $\kappa(A,2k)$ is the restricted strong condition number of $A$ over $(2k)$-sparse subspaces. This estimation error bound is nearly non-expansive when $k$ is sufficiently larger than $\bar k$. In particular, when $A$ is the identity matrix, so that $\kappa(A,2k)\equiv 1$, our bound recovers the previously known nearly non-expansive bounds for the Euclidean hard thresholding operator. We further show that this bound extends to an approximate version of $\mathcal{H}_{A,k}(w)$ estimated by the Hard Thresholding Pursuit (HTP) algorithm. We demonstrate the applicability of these bounds to the mean squared error analysis of HTP and of a novel extension of it based on preconditioning. Numerical evidence is provided to support our theory and to demonstrate the superiority of the proposed preconditioned HTP algorithm.
APA
Yuan, X. & Li, P. (2020). Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:3787-3813. Available from https://proceedings.mlr.press/v125/yuan20a.html.
