A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions

Quoc Tran Dinh, Anastasios Kyrillidis, Volkan Cevher
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):271-279, 2013.

Abstract

We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
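To illustrate the kind of method the abstract describes, here is a minimal sketch of a proximal Newton iteration for minimizing a smooth (ideally self-concordant) function plus an ℓ1 regularizer. The subproblem is solved approximately with a few proximal-gradient (ISTA) steps on the local quadratic model, and the damping 1/(1 + λ) uses the classical self-concordant Newton-decrement step; this is in the spirit of, but not identical to, the paper's analytic step-size for the dual graph-learning formulation. All names (`prox_newton_l1`, `soft_threshold`) and parameter choices are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_l1(grad, hess, x0, lam, iters=20, inner=50):
    """Sketch: minimize f(x) + lam * ||x||_1 by proximal Newton steps.

    Each outer step builds the quadratic model of f at x and solves
        min_z  g^T (z - x) + 0.5 (z - x)^T H (z - x) + lam * ||z||_1
    approximately with ISTA; the step length is damped by the proximal
    Newton decrement, as in classical self-concordant analysis.
    """
    x = x0.copy()
    for _ in range(iters):
        g, H = grad(x), hess(x)
        L = np.linalg.norm(H, 2)            # Lipschitz constant of the model
        z = x.copy()
        for _ in range(inner):              # ISTA on the quadratic model
            z = soft_threshold(z - (g + H @ (z - x)) / L, lam / L)
        d = z - x
        dec = np.sqrt(max(d @ H @ d, 0.0))  # proximal Newton decrement
        t = 1.0 if dec < 0.25 else 1.0 / (1.0 + dec)  # full step near optimum
        x = x + t * d
    return x
```

On a separable toy problem (f(x) = ½‖x − b‖², λ = 1) the minimizer is the soft-thresholded b, which the iteration reaches after a few damped steps; a serious instantiation for graph learning would replace the inner ISTA solver and use the paper's dual formulation.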

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-trandinh13,
  title     = {A proximal {N}ewton framework for composite minimization: Graph learning without {C}holesky decompositions and matrix inversions},
  author    = {Quoc Tran Dinh and Anastasios Kyrillidis and Volkan Cevher},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {271--279},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/trandinh13.pdf},
  url       = {http://proceedings.mlr.press/v28/trandinh13.html},
  abstract  = {We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.}
}
Endnote
%0 Conference Paper
%T A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions
%A Quoc Tran Dinh
%A Anastasios Kyrillidis
%A Volkan Cevher
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-trandinh13
%I PMLR
%J Proceedings of Machine Learning Research
%P 271--279
%U http://proceedings.mlr.press
%V 28
%N 2
%W PMLR
%X We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
RIS
TY  - CPAPER
TI  - A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions
AU  - Quoc Tran Dinh
AU  - Anastasios Kyrillidis
AU  - Volkan Cevher
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-trandinh13
PB  - PMLR
DP  - PMLR
SP  - 271
EP  - 279
L1  - http://proceedings.mlr.press/v28/trandinh13.pdf
UR  - http://proceedings.mlr.press/v28/trandinh13.html
AB  - We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
ER  -
APA
Tran Dinh, Q., Kyrillidis, A. & Cevher, V. (2013). A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(2):271-279.
