# A proximal Newton framework for composite minimization: Graph learning without Cholesky decompositions and matrix inversions

*Proceedings of the 30th International Conference on Machine Learning*, PMLR 28(2):271-279, 2013.

#### Abstract

We propose an algorithmic framework for convex composite minimization problems whose objective is the sum of two terms: a self-concordant smooth part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with a local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations.
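To illustrate the general shape of a proximal Newton iteration for composite minimization, here is a minimal, generic sketch for an ℓ1-regularized smooth objective. This is a textbook-style illustration under simplifying assumptions (a plain quadratic smooth term, a fixed unit step, and an ISTA inner solver for the local quadratic model), not the paper's algorithm, which relies on self-concordance and an analytic step-size selection; all function names here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_l1(grad, hess, x0, lam, outer_iters=20, inner_iters=100):
    """Generic proximal Newton sketch for min_x g(x) + lam * ||x||_1.

    Each outer step builds the quadratic model of g at x and minimizes
    model + l1 term approximately by proximal gradient (ISTA) over the
    Newton direction d, then takes a unit step (the paper instead uses
    an analytic step-size rule).
    """
    x = x0.copy()
    for _ in range(outer_iters):
        g = grad(x)          # gradient of the smooth part at x
        H = hess(x)          # Hessian of the smooth part at x
        L = np.linalg.norm(H, 2)   # Lipschitz constant of the model gradient
        d = np.zeros_like(x)
        for _ in range(inner_iters):
            model_grad = g + H @ d
            d = soft_threshold(x + d - model_grad / L, lam / L) - x
        x = x + d
    return x

# Tiny example: g(x) = 0.5 x^T A x - b^T x with a diagonal SPD A.
A = np.diag([2.0, 2.0])
b = np.array([3.0, 0.5])
x = prox_newton_l1(lambda x: A @ x - b, lambda x: A,
                   np.zeros(2), lam=1.0)
```

For this separable quadratic the minimizer can be read off coordinate-wise (shrinkage of b/2 by lam/2), giving `[1.0, 0.0]`, which the sketch recovers; in the paper's graph-learning instance the smooth part is instead the self-concordant log-determinant term.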