Principal Component Hierarchy for Sparse Quadratic Programs

Robbie Vreugdenhil, Viet Anh Nguyen, Armin Eftekhari, Peyman Mohajerin Esfahani
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10607-10616, 2021.

Abstract

We propose a novel approximation hierarchy for cardinality-constrained, convex quadratic programs that exploits the rank-dominating eigenvectors of the quadratic matrix. Each level of approximation admits a min-max characterization whose objective function can be optimized over the binary variables analytically, while preserving convexity in the continuous variables. Exploiting this property, we propose two scalable optimization algorithms, coined as the “best response” and the “dual program”, that can efficiently screen the potential indices of the nonzero elements of the original program. We show that the proposed methods are competitive with the existing screening methods in the current sparse regression literature, and they are particularly fast on instances with a high number of measurements in experiments with both synthetic and real datasets.
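To make the problem class concrete: a cardinality-constrained convex quadratic program restricts a least-squares fit to at most k nonzero coefficients. The sketch below solves a tiny instance by brute-force support enumeration; it illustrates only the problem being approximated, not the paper's hierarchy or its screening algorithms, and all names and sizes are hypothetical.

```python
# Cardinality-constrained least squares: min ||A x - b||^2 s.t. ||x||_0 <= k.
# Brute-force enumeration of supports (tractable only for tiny p); the paper's
# methods instead screen candidate supports via leading eigenvectors.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 20, 6, 2                       # measurements, features, sparsity budget
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[[1, 4]] = [3.0, -2.0]             # planted 2-sparse signal
b = A @ x_true + 0.01 * rng.standard_normal(n)

best_obj, best_support = np.inf, None
for support in itertools.combinations(range(p), k):
    As = A[:, support]                   # columns restricted to this support
    xs, *_ = np.linalg.lstsq(As, b, rcond=None)
    obj = float(np.linalg.norm(As @ xs - b) ** 2)
    if obj < best_obj:
        best_obj, best_support = obj, support

print(best_support)  # on this easy instance, the planted support is recovered
```

With n much larger than p this enumeration is cheap, but the number of supports grows combinatorially in p and k, which is what motivates screening heuristics like those studied in the paper.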

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-vreugdenhil21a,
  title     = {Principal Component Hierarchy for Sparse Quadratic Programs},
  author    = {Vreugdenhil, Robbie and Nguyen, Viet Anh and Eftekhari, Armin and Esfahani, Peyman Mohajerin},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10607--10616},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/vreugdenhil21a/vreugdenhil21a.pdf},
  url       = {https://proceedings.mlr.press/v139/vreugdenhil21a.html},
  abstract  = {We propose a novel approximation hierarchy for cardinality-constrained, convex quadratic programs that exploits the rank-dominating eigenvectors of the quadratic matrix. Each level of approximation admits a min-max characterization whose objective function can be optimized over the binary variables analytically, while preserving convexity in the continuous variables. Exploiting this property, we propose two scalable optimization algorithms, coined as the “best response” and the “dual program”, that can efficiently screen the potential indices of the nonzero elements of the original program. We show that the proposed methods are competitive with the existing screening methods in the current sparse regression literature, and they are particularly fast on instances with a high number of measurements in experiments with both synthetic and real datasets.}
}
Endnote
%0 Conference Paper
%T Principal Component Hierarchy for Sparse Quadratic Programs
%A Robbie Vreugdenhil
%A Viet Anh Nguyen
%A Armin Eftekhari
%A Peyman Mohajerin Esfahani
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-vreugdenhil21a
%I PMLR
%P 10607--10616
%U https://proceedings.mlr.press/v139/vreugdenhil21a.html
%V 139
%X We propose a novel approximation hierarchy for cardinality-constrained, convex quadratic programs that exploits the rank-dominating eigenvectors of the quadratic matrix. Each level of approximation admits a min-max characterization whose objective function can be optimized over the binary variables analytically, while preserving convexity in the continuous variables. Exploiting this property, we propose two scalable optimization algorithms, coined as the “best response” and the “dual program”, that can efficiently screen the potential indices of the nonzero elements of the original program. We show that the proposed methods are competitive with the existing screening methods in the current sparse regression literature, and they are particularly fast on instances with a high number of measurements in experiments with both synthetic and real datasets.
APA
Vreugdenhil, R., Nguyen, V.A., Eftekhari, A. & Esfahani, P.M. (2021). Principal Component Hierarchy for Sparse Quadratic Programs. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10607-10616. Available from https://proceedings.mlr.press/v139/vreugdenhil21a.html.