Generalized Majorization-Minimization

Sobhan Naderi Parizi, Kun He, Reza Aghajani, Stan Sclaroff, Pedro Felzenszwalb
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5022-5031, 2019.

Abstract

Non-convex optimization is ubiquitous in machine learning. Majorization-Minimization (MM) is a powerful iterative procedure for optimizing non-convex functions that works by optimizing a sequence of bounds on the function. In MM, the bound at each iteration is required to touch the objective function at the optimizer of the previous bound. We show that this touching constraint is unnecessary and overly restrictive. We generalize MM by relaxing this constraint, and propose a new optimization framework, named Generalized Majorization-Minimization (G-MM), which is more flexible. For instance, G-MM can incorporate application-specific biases into the optimization procedure without changing the objective function. We derive G-MM algorithms for several latent variable models and show empirically that they consistently outperform their MM counterparts in optimizing non-convex objectives. In particular, G-MM algorithms appear to be less sensitive to initialization.
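To make the mechanics concrete, here is a minimal MM sketch in Python (illustrative only, not the paper's algorithm or code): it minimizes the classic toy objective f(x) = sum_i |x - a_i| with the standard quadratic majorizer, which upper-bounds f everywhere and touches it at the current iterate. The toy objective, function names, and constants are assumptions for illustration; per the abstract, G-MM relaxes the requirement that each bound touch f at the previous bound's optimizer.

```python
# Minimal MM sketch (illustrative; not the paper's code).
# Objective: f(x) = sum_i |x - a_i|.
# Majorizer built at x_t:
#   g_t(x) = sum_i [ (x - a_i)^2 / (2|x_t - a_i|) + |x_t - a_i| / 2 ]
# which satisfies g_t(x) >= f(x) for all x and g_t(x_t) = f(x_t) -- the
# "touching" constraint that, per the abstract, G-MM relaxes.
import numpy as np

def mm_step(a, x, eps=1e-12):
    """One MM iteration: minimize the quadratic bound built at x."""
    w = 1.0 / np.maximum(np.abs(x - a), eps)  # curvatures of the bound
    return np.sum(w * a) / np.sum(w)          # closed-form minimizer of g_t

a = np.array([1.0, 2.0, 10.0])
x = 0.0
for _ in range(50):
    x = mm_step(a, x)  # each step can only decrease f(x)
print(x)  # approaches the median of a (here, 2.0)
```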

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-parizi19a,
  title     = {Generalized Majorization-Minimization},
  author    = {Parizi, Sobhan Naderi and He, Kun and Aghajani, Reza and Sclaroff, Stan and Felzenszwalb, Pedro},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5022--5031},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/parizi19a/parizi19a.pdf},
  url       = {https://proceedings.mlr.press/v97/parizi19a.html}
}
Endnote
%0 Conference Paper
%T Generalized Majorization-Minimization
%A Sobhan Naderi Parizi
%A Kun He
%A Reza Aghajani
%A Stan Sclaroff
%A Pedro Felzenszwalb
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-parizi19a
%I PMLR
%P 5022--5031
%U https://proceedings.mlr.press/v97/parizi19a.html
%V 97
APA
Parizi, S. N., He, K., Aghajani, R., Sclaroff, S., & Felzenszwalb, P. (2019). Generalized Majorization-Minimization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5022-5031. Available from https://proceedings.mlr.press/v97/parizi19a.html.
