IHT dies hard: Provable accelerated Iterative Hard Thresholding

Rajiv Khanna, Anastasios Kyrillidis
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:188-198, 2018.

Abstract

We study, both in theory and in practice, the use of momentum in classic iterative hard thresholding (IHT) methods. By simply modifying plain IHT, we investigate its convergence behavior on convex optimization criteria with non-convex constraints, under standard assumptions. In diverse scenarios, we observe that acceleration in IHT leads to significant improvements compared to state-of-the-art projected gradient descent and Frank-Wolfe variants. As a byproduct of our inspection, we study the impact of selecting the momentum parameter: as in convex settings, two modes of behavior are observed, “rippling” and linear, depending on the level of momentum.
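To connect the abstract's description to a concrete procedure, below is a minimal sketch of IHT with a momentum step, assuming a sparse linear regression objective f(x) = 0.5·||Ax − b||² with the constraint ||x||₀ ≤ k. The function names, the step-size choice, and the default momentum weight tau are illustrative assumptions, not the paper's exact algorithm; setting tau = 0 recovers plain IHT.

```python
import numpy as np

def hard_threshold(x, k):
    """Project x onto {z : ||z||_0 <= k} by keeping its k
    largest-magnitude entries and zeroing out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def accelerated_iht(A, b, k, tau=0.5, n_iters=200):
    """Momentum-augmented IHT for min_x 0.5*||Ax - b||^2 s.t. ||x||_0 <= k.

    tau is the momentum weight; tau = 0 recovers plain IHT.
    """
    n = A.shape[1]
    x_prev = np.zeros(n)
    u = np.zeros(n)                       # extrapolated ("momentum") point
    mu = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the gradient's Lipschitz constant
    for _ in range(n_iters):
        grad = A.T @ (A @ u - b)          # gradient of the least-squares loss at u
        x = hard_threshold(u - mu * grad, k)
        u = x + tau * (x - x_prev)        # momentum extrapolation
        x_prev = x
    return x

# Tiny usage example on a synthetic sparse recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[rng.choice(400, size=10, replace=False)] = rng.standard_normal(10)
b = A @ x_true
x_hat = accelerated_iht(A, b, k=10, tau=0.5)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Whether such an iteration converges smoothly or exhibits the “rippling” behavior the abstract mentions depends on the choice of tau; studying that trade-off is precisely the byproduct the authors describe.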

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-khanna18a,
  title     = {IHT dies hard: Provable accelerated Iterative Hard Thresholding},
  author    = {Khanna, Rajiv and Kyrillidis, Anastasios},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {188--198},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/khanna18a/khanna18a.pdf},
  url       = {https://proceedings.mlr.press/v84/khanna18a.html},
  abstract  = {We study, both in theory and in practice, the use of momentum in classic iterative hard thresholding (IHT) methods. By simply modifying plain IHT, we investigate its convergence behavior on convex optimization criteria with non-convex constraints, under standard assumptions. In diverse scenarios, we observe that acceleration in IHT leads to significant improvements compared to state-of-the-art projected gradient descent and Frank-Wolfe variants. As a byproduct of our inspection, we study the impact of selecting the momentum parameter: as in convex settings, two modes of behavior are observed, “rippling” and linear, depending on the level of momentum.}
}
Endnote
%0 Conference Paper
%T IHT dies hard: Provable accelerated Iterative Hard Thresholding
%A Rajiv Khanna
%A Anastasios Kyrillidis
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-khanna18a
%I PMLR
%P 188--198
%U https://proceedings.mlr.press/v84/khanna18a.html
%V 84
%X We study, both in theory and in practice, the use of momentum in classic iterative hard thresholding (IHT) methods. By simply modifying plain IHT, we investigate its convergence behavior on convex optimization criteria with non-convex constraints, under standard assumptions. In diverse scenarios, we observe that acceleration in IHT leads to significant improvements compared to state-of-the-art projected gradient descent and Frank-Wolfe variants. As a byproduct of our inspection, we study the impact of selecting the momentum parameter: as in convex settings, two modes of behavior are observed, “rippling” and linear, depending on the level of momentum.
APA
Khanna, R. & Kyrillidis, A. (2018). IHT dies hard: Provable accelerated Iterative Hard Thresholding. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:188-198. Available from https://proceedings.mlr.press/v84/khanna18a.html.