A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions

Quoc Tran Dinh, Anastasios Kyrillidis, Volkan Cevher
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):271-279, 2013.

Abstract

We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
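To make the setup concrete: the graph learning instance fits the composite template min_x F(x) := f(x) + g(x), with the self-concordant smooth part f(Theta) = -log det(Theta) + tr(S Theta) (S the sample covariance) and the nonsmooth regularizer g(Theta) = rho * ||Theta||_1. Below is a minimal, illustrative Python/NumPy sketch of one damped proximal Newton step for this instance. It is not the paper's algorithm: for readability it forms the explicit inverse W = Theta^{-1}, which is exactly the operation the paper's dual formulation avoids; the function names (soft_threshold, prox_newton_step) and the proximal-gradient inner solver are illustrative choices; and the analytic damped step 1/(1 + lambda_k), with lambda_k the proximal Newton decrement, is a standard choice in self-concordant analyses and, to our reading, the kind of analytic step size the abstract refers to.

# Illustrative sketch only: one damped proximal Newton step for
#   min_{Theta > 0}  -log det(Theta) + tr(S Theta) + rho * ||Theta||_1.
# Unlike the paper's inversion-free dual approach, this forms Theta^{-1}
# explicitly for clarity.
import numpy as np

def soft_threshold(X, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_newton_step(Theta, S, rho, inner_iters=50):
    """One damped proximal Newton step (illustrative, not the paper's method).

    f(Theta) = -log det(Theta) + tr(S Theta) has gradient S - Theta^{-1} and
    Hessian Theta^{-1} (x) Theta^{-1}.  The proximal Newton direction D solves
        min_D  <grad, D> + 0.5 * tr(W D W D) + rho * ||Theta + D||_1,
    with W = Theta^{-1}; here we approximate it by proximal gradient.
    """
    W = np.linalg.inv(Theta)               # the paper's framework avoids this
    grad = S - W
    # Lipschitz constant of D -> W D W in Frobenius norm: ||W||_2^2
    L = np.linalg.norm(W, 2) ** 2
    D = np.zeros_like(Theta)
    for _ in range(inner_iters):           # proximal gradient on the subproblem
        G = grad + W @ D @ W
        D = soft_threshold(Theta + D - G / L, rho / L) - Theta
    # Proximal Newton decrement lambda_k = ||D||_Theta = sqrt(tr(W D W D));
    # analytic damped step alpha_k = 1 / (1 + lambda_k), no line search.
    lam = np.sqrt(max(np.trace(W @ D @ W @ D), 0.0))
    alpha = 1.0 / (1.0 + lam)
    return Theta + alpha * D, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    S = np.cov(X, rowvar=False)
    Theta = np.eye(10)
    for _ in range(20):
        Theta, lam = prox_newton_step(Theta, S, rho=0.1)
        if lam < 1e-6:
            break

Because the damped step satisfies alpha_k * lambda_k = lambda_k / (1 + lambda_k) < 1, the update stays inside the Dikin ellipsoid of -log det, so positive definiteness of Theta is preserved without a line search or an explicit Cholesky-based feasibility check.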

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-trandinh13,
  title     = {A proximal {N}ewton framework for composite minimization: Graph learning without {C}holesky decompositions and matrix inversions},
  author    = {Tran Dinh, Quoc and Kyrillidis, Anastasios and Cevher, Volkan},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {271--279},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/trandinh13.pdf},
  url       = {https://proceedings.mlr.press/v28/trandinh13.html},
  abstract  = {We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.}
}
Endnote
%0 Conference Paper
%T A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions
%A Quoc Tran Dinh
%A Anastasios Kyrillidis
%A Volkan Cevher
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-trandinh13
%I PMLR
%P 271--279
%U https://proceedings.mlr.press/v28/trandinh13.html
%V 28
%N 2
%X We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
RIS
TY - CPAPER
TI - A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions
AU - Quoc Tran Dinh
AU - Anastasios Kyrillidis
AU - Volkan Cevher
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-trandinh13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 2
SP - 271
EP - 279
L1 - http://proceedings.mlr.press/v28/trandinh13.pdf
UR - https://proceedings.mlr.press/v28/trandinh13.html
AB - We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
ER -
APA
Tran Dinh, Q., Kyrillidis, A., & Cevher, V. (2013). A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):271-279. Available from https://proceedings.mlr.press/v28/trandinh13.html.