Parallel Majorization Minimization with Dynamically Restricted Domains for Nonconvex Optimization

Yan Kaganovsky, Ikenna Odinaka, David Carlson, Lawrence Carin
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1497-1505, 2016.

Abstract

We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then broken into simpler, parallel tasks, while guaranteeing the monotonic reduction of the original objective function and convergence to a local minimum. This framework also allows one to restrict the upper bound to a local dynamic convex domain, so that the bound is better matched to the local curvature of the objective function, resulting in accelerated convergence. We test the proposed framework on a nonconvex support vector machine based on a sigmoid loss function and on nonconvex penalized logistic regression.
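The scheme can be read as follows: at each iterate x^t, build a convex surrogate g(x | x^t) that is additively separable across coordinates, satisfies g(x | x^t) >= f(x) on a local domain D^t with g(x^t | x^t) = f(x^t), and set x^{t+1} = argmin over D^t of g(x | x^t), one coordinate per worker. The Python sketch below applies this recipe to the sigmoid-loss SVM mentioned as a test case; the separable quadratic surrogate, the curvature estimate d, the box radius, and the grow/shrink constants are assumptions made for this toy illustration, not the authors' exact construction.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, X, y, lam=1e-2):
    # Nonconvex sigmoid-loss SVM objective (one of the paper's test
    # problems), with an l2 penalty added for this sketch.
    return np.mean(sigmoid(-y * (X @ w))) + 0.5 * lam * w @ w

def gradient(w, X, y, lam=1e-2):
    s = sigmoid(-y * (X @ w))
    return X.T @ (-s * (1.0 - s) * y) / len(y) + lam * w

def parallel_mm(X, y, iters=200, radius=1.0, lam=1e-2):
    n, p = X.shape
    w = np.zeros(p)
    f = objective(w, X, y, lam)
    # Crude per-coordinate curvature estimate for the separable quadratic
    # surrogate; the accept/reject test below enforces monotone descent
    # even when this is not a rigorous upper bound.
    d = 0.25 * (X ** 2).mean(axis=0) + lam
    for _ in range(iters):
        g = gradient(w, X, y, lam)
        # Minimize the separable surrogate over the restricted box
        # |w_j - w_j^t| <= radius; each coordinate is independent,
        # so this update could run on p parallel workers.
        w_new = w + np.clip(-g / d, -radius, radius)
        f_new = objective(w_new, X, y, lam)
        if f_new <= f:              # descent: accept and widen the domain
            w, f, radius = w_new, f_new, 1.5 * radius
        else:                       # surrogate too loose: shrink the domain
            radius *= 0.5
    return w

# Toy usage: linearly separable data with 10% label noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10))
y[rng.random(200) < 0.1] *= -1
w_hat = parallel_mm(X, y)
print(f"training accuracy: {np.mean(np.sign(X @ w_hat) == y):.2f}")

Because each coordinate update depends only on its own gradient entry and curvature estimate, the inner minimization is embarrassingly parallel, and the accept/reject test preserves the monotone-descent guarantee described in the abstract while the shrinking box plays the role of the dynamically restricted domain.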

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-kaganovsky16,
  title     = {Parallel Majorization Minimization with Dynamically Restricted Domains for Nonconvex Optimization},
  author    = {Kaganovsky, Yan and Odinaka, Ikenna and Carlson, David and Carin, Lawrence},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {1497--1505},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/kaganovsky16.pdf},
  url       = {https://proceedings.mlr.press/v51/kaganovsky16.html},
  abstract  = {We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then broken into simpler, parallel tasks, while guaranteeing the monotonic reduction of the original objective function and convergence to a local minimum. This framework also allows one to restrict the upper bound to a local dynamic convex domain, so that the bound is better matched to the local curvature of the objective function, resulting in accelerated convergence. We test the proposed framework on a nonconvex support vector machine based on a sigmoid loss function and on nonconvex penalized logistic regression.}
}
Endnote
%0 Conference Paper
%T Parallel Majorization Minimization with Dynamically Restricted Domains for Nonconvex Optimization
%A Yan Kaganovsky
%A Ikenna Odinaka
%A David Carlson
%A Lawrence Carin
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-kaganovsky16
%I PMLR
%P 1497--1505
%U https://proceedings.mlr.press/v51/kaganovsky16.html
%V 51
%X We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then broken into simpler, parallel tasks, while guaranteeing the monotonic reduction of the original objective function and convergence to a local minimum. This framework also allows one to restrict the upper bound to a local dynamic convex domain, so that the bound is better matched to the local curvature of the objective function, resulting in accelerated convergence. We test the proposed framework on a nonconvex support vector machine based on a sigmoid loss function and on nonconvex penalized logistic regression.
RIS
TY - CPAPER
TI - Parallel Majorization Minimization with Dynamically Restricted Domains for Nonconvex Optimization
AU - Yan Kaganovsky
AU - Ikenna Odinaka
AU - David Carlson
AU - Lawrence Carin
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-kaganovsky16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 51
SP - 1497
EP - 1505
L1 - http://proceedings.mlr.press/v51/kaganovsky16.pdf
UR - https://proceedings.mlr.press/v51/kaganovsky16.html
AB - We propose an optimization framework for nonconvex problems based on majorization-minimization that is particularly well-suited for parallel computing. It reduces the optimization of a high-dimensional nonconvex objective function to successive optimizations of locally tight and convex upper bounds which are additively separable into low-dimensional objectives. The original problem is then broken into simpler, parallel tasks, while guaranteeing the monotonic reduction of the original objective function and convergence to a local minimum. This framework also allows one to restrict the upper bound to a local dynamic convex domain, so that the bound is better matched to the local curvature of the objective function, resulting in accelerated convergence. We test the proposed framework on a nonconvex support vector machine based on a sigmoid loss function and on nonconvex penalized logistic regression.
ER -
APA
Kaganovsky, Y., Odinaka, I., Carlson, D. & Carin, L. (2016). Parallel Majorization Minimization with Dynamically Restricted Domains for Nonconvex Optimization. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:1497-1505. Available from https://proceedings.mlr.press/v51/kaganovsky16.html.