ASAP: Architecture Search, Anneal and Prune

Asaf Noy, Niv Nayman, Tal Ridnik, Nadav Zamir, Sivan Doveh, Itamar Friedman, Raja Giryes, Lihi Zelnik
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:493-503, 2020.

Abstract

Automatic methods for Neural Architecture Search (NAS) have been shown to produce state-of-the-art network models, yet their main drawback is the computational complexity of the search process. Since early methods optimized over a discrete search space, thousands of GPU days were required for convergence. A recent approach constructs a differentiable search space that enables gradient-based optimization, reducing the search time to a few days. While successful, such methods still include some non-continuous steps, e.g., pruning many weak connections at once. In this paper, we propose a differentiable search space that allows annealing of architecture weights while gradually pruning inferior operations, so that the search converges to a single output network in a continuous manner. Experiments on several vision datasets demonstrate the effectiveness of our method with respect to search cost, accuracy, and the memory footprint of the resulting model.
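The abstract does not spell out the mechanics, but the anneal-and-prune idea can be illustrated with a minimal sketch: the architecture weights on an edge are passed through a temperature-scaled softmax, the temperature is lowered over time, and operations whose weight falls below a threshold are pruned until a single operation remains. The function names, annealing schedule, and threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, temperature):
    """Temperature-scaled softmax over architecture weights."""
    z = np.asarray(x, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def anneal_and_prune(alphas, ops, steps=100, t_init=1.0, decay=0.95, threshold=0.05):
    """Sketch of anneal-and-prune on one edge: lower the temperature each step
    and drop operations whose annealed weight falls below the threshold,
    until only one operation survives."""
    alphas, ops = list(alphas), list(ops)
    temperature = t_init
    for _ in range(steps):
        weights = softmax(alphas, temperature)
        # Prune every operation whose weight dropped below the threshold.
        keep = [i for i, w in enumerate(weights) if w >= threshold]
        if len(keep) < len(ops):
            alphas = [alphas[i] for i in keep]
            ops = [ops[i] for i in keep]
        if len(ops) == 1:
            break
        temperature *= decay  # annealing schedule
        # (In the actual search, alphas would also be updated by gradient descent here.)
    return ops

# Toy usage: candidate operations and their architecture weights on a single edge.
print(anneal_and_prune([0.9, 0.4, 0.1, -0.2],
                       ["sep_conv_3x3", "skip_connect", "max_pool_3x3", "none"]))
```

As the temperature decreases, the softmax sharpens and weak operations fall below the threshold one by one, so the edge is resolved continuously rather than by a single pruning step at the end of the search.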

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-noy20a,
  title     = {ASAP: Architecture Search, Anneal and Prune},
  author    = {Noy, Asaf and Nayman, Niv and Ridnik, Tal and Zamir, Nadav and Doveh, Sivan and Friedman, Itamar and Giryes, Raja and Zelnik, Lihi},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {493--503},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/noy20a/noy20a.pdf},
  url       = {https://proceedings.mlr.press/v108/noy20a.html},
  abstract  = {Automatic methods for Neural Architecture Search (NAS) have been shown to produce state-of-the-art network models, yet their main drawback is the computational complexity of the search process. Since early methods optimized over a discrete search space, thousands of GPU days were required for convergence. A recent approach constructs a differentiable search space that enables gradient-based optimization, reducing the search time to a few days. While successful, such methods still include some non-continuous steps, e.g., pruning many weak connections at once. In this paper, we propose a differentiable search space that allows annealing of architecture weights while gradually pruning inferior operations, so that the search converges to a single output network in a continuous manner. Experiments on several vision datasets demonstrate the effectiveness of our method with respect to search cost, accuracy, and the memory footprint of the resulting model.}
}
Endnote
%0 Conference Paper
%T ASAP: Architecture Search, Anneal and Prune
%A Asaf Noy
%A Niv Nayman
%A Tal Ridnik
%A Nadav Zamir
%A Sivan Doveh
%A Itamar Friedman
%A Raja Giryes
%A Lihi Zelnik
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-noy20a
%I PMLR
%P 493--503
%U https://proceedings.mlr.press/v108/noy20a.html
%V 108
%X Automatic methods for Neural Architecture Search (NAS) have been shown to produce state-of-the-art network models, yet their main drawback is the computational complexity of the search process. Since early methods optimized over a discrete search space, thousands of GPU days were required for convergence. A recent approach constructs a differentiable search space that enables gradient-based optimization, reducing the search time to a few days. While successful, such methods still include some non-continuous steps, e.g., pruning many weak connections at once. In this paper, we propose a differentiable search space that allows annealing of architecture weights while gradually pruning inferior operations, so that the search converges to a single output network in a continuous manner. Experiments on several vision datasets demonstrate the effectiveness of our method with respect to search cost, accuracy, and the memory footprint of the resulting model.
APA
Noy, A., Nayman, N., Ridnik, T., Zamir, N., Doveh, S., Friedman, I., Giryes, R., & Zelnik, L. (2020). ASAP: Architecture Search, Anneal and Prune. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:493-503. Available from https://proceedings.mlr.press/v108/noy20a.html.
