An explicit analysis of the entropic penalty in linear programming
Proceedings of the 31st Conference on Learning Theory, PMLR 75:1841-1855, 2018.
Abstract
Solving linear programs by using entropic penalization has recently attracted new interest in the optimization community, since this strategy forms the basis for the fastest-known algorithms for the optimal transport problem, with many applications in modern large-scale machine learning. Crucial to these applications has been an analysis of how quickly solutions to the penalized program approach true optima of the original linear program. More than 20 years ago, Cominetti and San Martín showed that this convergence is exponentially fast; however, their proof is asymptotic and does not give any indication of how accurately the entropic program approximates the original program for any particular choice of the penalization parameter. We close this longstanding gap in the literature regarding entropic penalization by giving a new proof of the exponential convergence, valid for any linear program. Our proof is non-asymptotic, yields explicit constants, and has the virtue of being extremely simple. We provide matching lower bounds and show that the entropic approach does not lead to a near-linear time approximation scheme for the linear assignment problem.
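A minimal numerical sketch (not the paper's code) of the setting the abstract describes: the entropy-penalized optimal transport program with penalization parameter eta is solved by standard Sinkhorn scaling, and the cost of the resulting plan is compared against the exact linear-programming optimum. The cost matrix, marginals, iteration count, and parameter values below are illustrative assumptions, not taken from the paper.

```python
# Sketch: entropic penalization of an optimal transport LP, solved by Sinkhorn
# iterations, compared with the exact LP optimum for increasing eta.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 5
C = rng.random((n, n))           # hypothetical cost matrix
r = np.full(n, 1.0 / n)          # source marginal
c = np.full(n, 1.0 / n)          # target marginal

# Exact LP value: min <C, P> s.t. P 1 = r, P^T 1 = c, P >= 0.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraints
    A_eq[n + i, i::n] = 1.0            # column-sum constraints
exact = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([r, c]),
                bounds=(0, None)).fun

def sinkhorn(C, r, c, eta, iters=2000):
    """Solve the entropy-penalized program via alternating diagonal scaling."""
    K = np.exp(-eta * C)
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)

for eta in (1, 10, 50, 100):
    P = sinkhorn(C, r, c, eta)
    print(f"eta={eta:4d}  cost of penalized plan={np.sum(P * C):.6f}  "
          f"gap to LP optimum={np.sum(P * C) - exact:.2e}")
```

On small instances like this one, the printed gap shrinks rapidly as eta grows, consistent with the exponential convergence discussed in the abstract.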