K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets

Xiu Su, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, Changshui Zhang, Chang Xu
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9880-9890, 2021.

Abstract

In one-shot weight sharing for NAS, the weights of each operation (at each layer) are supposed to be identical for all architectures (paths) in the supernet. However, this rules out the possibility of adjusting operation weights to cater for different paths, which limits the reliability of the evaluation results. In this paper, instead of counting on a single supernet, we introduce $K$-shot supernets and take their weights for each operation as a dictionary. The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code. This enables a matrix approximation of the stand-alone weight matrix with a higher rank ($K>1$). A \textit{simplex-net} is introduced to produce architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights in the $K$-shot supernets and acquire corresponding weights for better evaluation. $K$-shot supernets and simplex-net can be iteratively trained, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that K-shot NAS significantly improves the evaluation accuracy of paths and thus brings in impressive performance improvements.
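To make the dictionary-plus-simplex-code idea from the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' released implementation. The module names (`SimplexNet`, `KShotConv`), the small MLP used to produce the code, and the random architecture encoding are illustrative assumptions; the sketch only shows how a path's operation weight can be formed as a convex combination of K weight copies.

```python
# Minimal sketch of K-shot weight sharing (illustrative; not the authors' code).
# Assumptions: PyTorch, a per-path architecture encoding vector, and a single
# 3x3 convolution standing in for one shared operation of the supernet.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexNet(nn.Module):
    """Maps an architecture encoding to a K-dim simplex code (softmax output)."""
    def __init__(self, arch_dim, k, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, k),
        )

    def forward(self, arch_encoding):
        # Softmax keeps the code on the simplex: non-negative entries summing to 1.
        return F.softmax(self.mlp(arch_encoding), dim=-1)

class KShotConv(nn.Module):
    """One operation whose weight is a convex combination of K supernet copies."""
    def __init__(self, k, c_in, c_out, kernel_size=3):
        super().__init__()
        # Dictionary of K weight tensors for the same operation.
        self.weights = nn.Parameter(
            torch.randn(k, c_out, c_in, kernel_size, kernel_size) * 0.01
        )

    def forward(self, x, code):
        # code: (K,) simplex code for the sampled path.
        w = torch.einsum('k,koihw->oihw', code, self.weights)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)

# Usage sketch: encode a sampled path, get its code, run the combined weight.
K, arch_dim = 4, 16
simplex_net = SimplexNet(arch_dim, K)
op = KShotConv(K, c_in=8, c_out=8)
arch_encoding = torch.rand(arch_dim)      # stand-in for a path's encoding
code = simplex_net(arch_encoding)         # architecture-customized simplex code
y = op(torch.randn(1, 8, 32, 32), code)   # forward pass with convex-combined weights
```

With K=1 this reduces to ordinary one-shot weight sharing; with K>1 the per-path combination allows different paths to use different effective weights, which is the mechanism the abstract credits for more reliable path evaluation.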

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-su21a,
  title     = {K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets},
  author    = {Su, Xiu and You, Shan and Zheng, Mingkai and Wang, Fei and Qian, Chen and Zhang, Changshui and Xu, Chang},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9880--9890},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/su21a/su21a.pdf},
  url       = {https://proceedings.mlr.press/v139/su21a.html},
  abstract  = {In one-shot weight sharing for NAS, the weights of each operation (at each layer) are supposed to be identical for all architectures (paths) in the supernet. However, this rules out the possibility of adjusting operation weights to cater for different paths, which limits the reliability of the evaluation results. In this paper, instead of counting on a single supernet, we introduce $K$-shot supernets and take their weights for each operation as a dictionary. The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code. This enables a matrix approximation of the stand-alone weight matrix with a higher rank ($K>1$). A \textit{simplex-net} is introduced to produce architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights in the $K$-shot supernets and acquire corresponding weights for better evaluation. $K$-shot supernets and simplex-net can be iteratively trained, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that K-shot NAS significantly improves the evaluation accuracy of paths and thus brings in impressive performance improvements.}
}
Endnote
%0 Conference Paper
%T K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets
%A Xiu Su
%A Shan You
%A Mingkai Zheng
%A Fei Wang
%A Chen Qian
%A Changshui Zhang
%A Chang Xu
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-su21a
%I PMLR
%P 9880--9890
%U https://proceedings.mlr.press/v139/su21a.html
%V 139
%X In one-shot weight sharing for NAS, the weights of each operation (at each layer) are supposed to be identical for all architectures (paths) in the supernet. However, this rules out the possibility of adjusting operation weights to cater for different paths, which limits the reliability of the evaluation results. In this paper, instead of counting on a single supernet, we introduce $K$-shot supernets and take their weights for each operation as a dictionary. The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code. This enables a matrix approximation of the stand-alone weight matrix with a higher rank ($K>1$). A \textit{simplex-net} is introduced to produce architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights in the $K$-shot supernets and acquire corresponding weights for better evaluation. $K$-shot supernets and simplex-net can be iteratively trained, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that K-shot NAS significantly improves the evaluation accuracy of paths and thus brings in impressive performance improvements.
APA
Su, X., You, S., Zheng, M., Wang, F., Qian, C., Zhang, C., & Xu, C. (2021). K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9880-9890. Available from https://proceedings.mlr.press/v139/su21a.html.

Related Material