PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search

Haibin Wang, Ce Ge, Hesen Chen, Xiuyu Sun
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:35642-35654, 2023.

Abstract

The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training within a huge sample space damages the performance of individual subnets and requires much computation to search for an optimal model. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is performed on the preferred architectures to alleviate update conflicts. Extensive experiments have demonstrated that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures, and importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.
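
The abstract describes a two-step pipeline: a zero-cost selector first prunes the sample space down to a small set of preferred architectures, and weight-sharing one-shot training is then restricted to that set, after which any preferred subnet can be specialized without further search. The following is a minimal, hypothetical sketch of that flow, not the authors' implementation (see the linked GitHub repository for that); the search space, the proxy score, and the `Supernet` stub are illustrative placeholders.

```python
# Hypothetical sketch of a PreNAS-style preselect-then-train pipeline.
# All names (sample_architectures, zero_cost_score, Supernet) are placeholders,
# not identifiers from the PreNAS codebase.

import random


def sample_architectures(num, depths=(12, 14, 16), widths=(192, 256, 320, 384)):
    """Randomly sample candidate subnet configurations from a toy search space."""
    return [{"depth": random.choice(depths), "width": random.choice(widths)}
            for _ in range(num)]


def zero_cost_score(arch):
    """Stand-in for a training-free proxy; real zero-cost proxies use measures
    computed from a few minibatches rather than this toy heuristic."""
    return arch["depth"] * arch["width"] + random.random()


class Supernet:
    """Toy weight-sharing container: 'training' just counts updates per subnet."""

    def __init__(self, archs):
        self.updates = {self._key(a): 0 for a in archs}

    @staticmethod
    def _key(arch):
        return (arch["depth"], arch["width"])

    def train_step(self, arch):
        self.updates[self._key(arch)] += 1  # placeholder for a real gradient step

    def extract(self, arch):
        return {"arch": arch, "updates": self.updates[self._key(arch)]}


def prenas_style_pipeline(space_size=1000, num_preferred=8, steps=200):
    # 1) Preselect: rank the sampled space with the zero-cost proxy and keep
    #    only a small set of preferred architectures (the search-free step).
    candidates = sample_architectures(space_size)
    unique = {tuple(sorted(a.items())): a for a in candidates}.values()
    preferred = sorted(unique, key=zero_cost_score, reverse=True)[:num_preferred]

    # 2) One-shot training: share weights across only the preferred subnets,
    #    reducing update conflicts relative to training over the full space.
    supernet = Supernet(preferred)
    for _ in range(steps):
        supernet.train_step(random.choice(preferred))

    # 3) Instant specialization: extract any preferred subnet directly,
    #    with zero search cost after training.
    return [supernet.extract(a) for a in preferred]


if __name__ == "__main__":
    for model in prenas_style_pipeline():
        print(model)
```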

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-wang23f,
  title     = {{P}re{NAS}: Preferred One-Shot Learning Towards Efficient Neural Architecture Search},
  author    = {Wang, Haibin and Ge, Ce and Chen, Hesen and Sun, Xiuyu},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {35642--35654},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/wang23f/wang23f.pdf},
  url       = {https://proceedings.mlr.press/v202/wang23f.html},
  abstract  = {The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training within a huge sample space damages the performance of individual subnets and requires much computation to search for an optimal model. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is performed on the preferred architectures to alleviate update conflicts. Extensive experiments have demonstrated that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures, and importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.}
}
Endnote
%0 Conference Paper
%T PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search
%A Haibin Wang
%A Ce Ge
%A Hesen Chen
%A Xiuyu Sun
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-wang23f
%I PMLR
%P 35642--35654
%U https://proceedings.mlr.press/v202/wang23f.html
%V 202
%X The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training within a huge sample space damages the performance of individual subnets and requires much computation to search for an optimal model. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is performed on the preferred architectures to alleviate update conflicts. Extensive experiments have demonstrated that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures, and importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.
APA
Wang, H., Ge, C., Chen, H. & Sun, X. (2023). PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:35642-35654. Available from https://proceedings.mlr.press/v202/wang23f.html.