Automated Super-Network Generation for Scalable Neural Architecture Search

Juan Pablo Munoz, Nikolay Lyalyushkin, Chaunte Willetta Lacewell, Anastasia Senina, Daniel Cummings, Anthony Sarah, Alexander Kozlov, Nilesh Jain
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:5/1-15, 2022.

Abstract

Weight-sharing Neural Architecture Search (NAS) solutions often discover neural network architectures that outperform their human-crafted counterparts. Weight-sharing allows the creation and training of super-networks that contain many smaller and more efficient child models, a.k.a. sub-networks. For the average deep learning practitioner, generating and training one of these super-networks for an arbitrary neural network architecture design space can be a daunting experience. In this paper, we present BootstrapNAS, a software framework that addresses this challenge by automating the generation and training of super-networks. Developers can use this solution to convert a pre-trained model into a super-network. BootstrapNAS then trains the super-network using a weight-sharing NAS technique available in the framework or provided by the user. Finally, a search component discovers high-performing sub-networks that are returned to the end-user. We demonstrate BootstrapNAS by automatically generating super-networks from popular pre-trained models (MobileNetV2, MobileNetV3, EfficientNet, ResNet-50, and HyperSeg) available from Torchvision and other repositories. BootstrapNAS can achieve up to a 9.87× improvement in throughput compared to the pre-trained Torchvision ResNet-50 (FP32) on an Intel Xeon platform.
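The abstract outlines a three-stage flow: convert a pre-trained model into a weight-sharing super-network, train the super-network, and then search it for high-performing sub-networks. Since this page contains no code, the sketch below is a toy, self-contained illustration of the weight-sharing mechanism only; it is my own construction, not the BootstrapNAS API, and every class and function name in it is hypothetical. It uses a single elastic-width linear layer whose sub-networks slice one shared weight tensor, random single-path training, and an exhaustive width search; the conversion of a real pre-trained model (stage 1) is elided.

import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticLinear(nn.Module):
    """A linear layer whose sub-networks all slice one shared weight matrix."""

    def __init__(self, in_features, max_out_features):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(max_out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(max_out_features))
        self.active_out = max_out_features  # width of the currently active sub-network

    def forward(self, x):
        # Only the first `active_out` output units participate: weight sharing.
        return x @ self.weight[: self.active_out].t() + self.bias[: self.active_out]


class ToySupernet(nn.Module):
    """Super-network with one elastic hidden layer; each width is a sub-network."""

    def __init__(self, in_dim=16, width_choices=(8, 16, 32), n_classes=4):
        super().__init__()
        self.width_choices = width_choices
        self.hidden = ElasticLinear(in_dim, max(width_choices))
        self.head = nn.Linear(max(width_choices), n_classes)

    def set_width(self, width):
        self.hidden.active_out = width

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        # Zero-pad so the shared classifier head always sees the maximum width.
        h = F.pad(h, (0, self.head.in_features - h.shape[1]))
        return self.head(h)


# Synthetic data stands in for a real dataset.
x = torch.randn(256, 16)
y = torch.randint(0, 4, (256,))
model = ToySupernet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# "Training": sample one random sub-network (width) per step, so every width
# gets trained while all widths share the same underlying weights.
for step in range(200):
    model.set_width(random.choice(model.width_choices))
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Search": evaluate every sub-network with no retraining; a real search would
# also score latency or throughput on the target hardware.
with torch.no_grad():
    for w in model.width_choices:
        model.set_width(w)
        acc = (model(x).argmax(dim=1) == y).float().mean().item()
        print(f"width={w:2d}  train-accuracy={acc:.2f}")

Because the sub-networks share weights, the search stage only has to run inference per candidate rather than train each one from scratch; this is the property that makes the super-network approach scale.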

Cite this Paper


BibTeX
@InProceedings{pmlr-v188-munoz22a,
  title     = {Automated Super-Network Generation for Scalable Neural Architecture Search},
  author    = {Munoz, Juan Pablo and Lyalyushkin, Nikolay and Lacewell, Chaunte Willetta and Senina, Anastasia and Cummings, Daniel and Sarah, Anthony and Kozlov, Alexander and Jain, Nilesh},
  booktitle = {Proceedings of the First International Conference on Automated Machine Learning},
  pages     = {5/1--15},
  year      = {2022},
  editor    = {Guyon, Isabelle and Lindauer, Marius and van der Schaar, Mihaela and Hutter, Frank and Garnett, Roman},
  volume    = {188},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v188/munoz22a/munoz22a.pdf},
  url       = {https://proceedings.mlr.press/v188/munoz22a.html},
  abstract  = {Weight-sharing Neural Architecture Search (NAS) solutions often discover neural network architectures that outperform their human-crafted counterparts. Weight-sharing allows the creation and training of super-networks that contain many smaller and more efficient child models, a.k.a., sub-networks. For an average deep learning practitioner, generating and training one of these super-networks for an arbitrary neural network architecture design space can be a daunting experience. In this paper, we present BootstrapNAS, a software framework that addresses this challenge by automating the generation and training of super-networks. Developers can use this solution to convert a pre-trained model into a super-network. BootstrapNAS then trains the super-network using a weight-sharing NAS technique available in the framework or provided by the user. Finally, a search component discovers high-performing sub-networks that are returned to the end-user. We demonstrate BootstrapNAS by automatically generating super-networks from popular pre-trained models (MobileNetV2, MobileNetV3, EfficientNet, ResNet50 and HyperSeg), available from Torchvision and other repositories. BootstrapNAS can achieve up to 9.87{\texttimes} improvement in throughput in comparison to the pre-trained Torchvision ResNet-50 (FP32) on Intel Xeon platform.}
}
EndNote
%0 Conference Paper
%T Automated Super-Network Generation for Scalable Neural Architecture Search
%A Juan Pablo Munoz
%A Nikolay Lyalyushkin
%A Chaunte Willetta Lacewell
%A Anastasia Senina
%A Daniel Cummings
%A Anthony Sarah
%A Alexander Kozlov
%A Nilesh Jain
%B Proceedings of the First International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Isabelle Guyon
%E Marius Lindauer
%E Mihaela van der Schaar
%E Frank Hutter
%E Roman Garnett
%F pmlr-v188-munoz22a
%I PMLR
%P 5/1--15
%U https://proceedings.mlr.press/v188/munoz22a.html
%V 188
%X Weight-sharing Neural Architecture Search (NAS) solutions often discover neural network architectures that outperform their human-crafted counterparts. Weight-sharing allows the creation and training of super-networks that contain many smaller and more efficient child models, a.k.a., sub-networks. For an average deep learning practitioner, generating and training one of these super-networks for an arbitrary neural network architecture design space can be a daunting experience. In this paper, we present BootstrapNAS, a software framework that addresses this challenge by automating the generation and training of super-networks. Developers can use this solution to convert a pre-trained model into a super-network. BootstrapNAS then trains the super-network using a weight-sharing NAS technique available in the framework or provided by the user. Finally, a search component discovers high-performing sub-networks that are returned to the end-user. We demonstrate BootstrapNAS by automatically generating super-networks from popular pre-trained models (MobileNetV2, MobileNetV3, EfficientNet, ResNet50 and HyperSeg), available from Torchvision and other repositories. BootstrapNAS can achieve up to 9.87× improvement in throughput in comparison to the pre-trained Torchvision ResNet-50 (FP32) on Intel Xeon platform.
APA
Munoz, J.P., Lyalyushkin, N., Lacewell, C.W., Senina, A., Cummings, D., Sarah, A., Kozlov, A. & Jain, N. (2022). Automated Super-Network Generation for Scalable Neural Architecture Search. Proceedings of the First International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 188:5/1-15. Available from https://proceedings.mlr.press/v188/munoz22a.html.