Exploiting Network Compressibility and Topology in Zero-Cost NAS

Lichuan Xiang, Rosco Hunter, Minghao Xu, Łukasz Dudziak, Hongkai Wen
Proceedings of the Second International Conference on Automated Machine Learning, PMLR 224:18/1-14, 2023.

Abstract

Neural Architecture Search (NAS) has been widely used to discover neural network architectures that outperform manually designed ones. Despite this success, current NAS approaches often require extensive evaluation of many candidate architectures in the search space, or the training of large supernetworks. To reduce this search cost, zero-cost proxies have recently been proposed to predict the performance of an architecture efficiently, without training it. However, while many new proxies have been introduced in recent years, relatively little attention has been devoted to deepening our understanding of the existing ones, and in particular to how they relate to and affect one another, a largely, though not entirely, overlooked topic. Contrary to that trend, we argue that it is worth revisiting and analysing the existing proxies in order to push the boundaries of zero-cost NAS further. Towards that goal, we propose to view the existing proxies through a common lens of network compressibility, trainability, and expressivity, as discussed in the pruning literature. Notably, doing so allows us to build a better understanding of the high-level relationships between different proxies, and to refine some of them into more informative variants. We leverage these insights to design a novel saliency and metric aggregation method informed by compressibility, orthogonality, and network topology. Our proposed methods are simple but powerful, yielding state-of-the-art results on several popular NAS benchmarks.
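
To make the abstract's terminology concrete, the sketch below illustrates the general zero-cost NAS recipe this work builds on: untrained candidate networks are scored with a cheap saliency proxy (here a SNIP-style sum of |weight * gradient| at initialization, one of the classic proxies this line of work revisits), and several proxy scores are combined by rank aggregation (a second trivial proxy, parameter count, stands in for the additional metrics). This is a generic illustration with hypothetical helper names (snip_saliency, rank_aggregate); it is not the saliency or aggregation method proposed in the paper.

    import torch
    import torch.nn as nn

    def snip_saliency(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
        """SNIP-style proxy: sum of |w * dL/dw| over an untrained network."""
        model.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return sum((p * p.grad).abs().sum().item()
                   for p in model.parameters() if p.grad is not None)

    def rank_aggregate(scores_per_proxy):
        """Combine proxies by summing per-architecture ranks (higher is better)."""
        n = len(scores_per_proxy[0])
        totals = [0] * n
        for scores in scores_per_proxy:
            for rank, idx in enumerate(sorted(range(n), key=lambda i: scores[i])):
                totals[idx] += rank
        return totals

    # Toy usage: rank two tiny candidate "architectures" with one minibatch.
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    candidates = [
        nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)),
        nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4)),
    ]
    snip_scores = [snip_saliency(m, x, y) for m in candidates]
    param_counts = [sum(p.numel() for p in m.parameters()) for m in candidates]
    print(rank_aggregate([snip_scores, param_counts]))  # higher total = preferred

The paper's contribution lies in which saliencies are computed and how they are aggregated (using compressibility, orthogonality, and network topology); the sketch above deliberately makes no attempt to reproduce that.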

Cite this Paper


BibTeX
@InProceedings{pmlr-v224-xiang23a,
  title     = {Exploiting Network Compressibility and Topology in Zero-Cost NAS},
  author    = {Xiang, Lichuan and Hunter, Rosco and Xu, Minghao and Dudziak, {\L}ukasz and Wen, Hongkai},
  booktitle = {Proceedings of the Second International Conference on Automated Machine Learning},
  pages     = {18/1--14},
  year      = {2023},
  editor    = {Faust, Aleksandra and Garnett, Roman and White, Colin and Hutter, Frank and Gardner, Jacob R.},
  volume    = {224},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v224/xiang23a/xiang23a.pdf},
  url       = {https://proceedings.mlr.press/v224/xiang23a.html}
}
Endnote
%0 Conference Paper
%T Exploiting Network Compressibility and Topology in Zero-Cost NAS
%A Lichuan Xiang
%A Rosco Hunter
%A Minghao Xu
%A Łukasz Dudziak
%A Hongkai Wen
%B Proceedings of the Second International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Aleksandra Faust
%E Roman Garnett
%E Colin White
%E Frank Hutter
%E Jacob R. Gardner
%F pmlr-v224-xiang23a
%I PMLR
%P 18/1--14
%U https://proceedings.mlr.press/v224/xiang23a.html
%V 224
APA
Xiang, L., Hunter, R., Xu, M., Dudziak, Ł. & Wen, H. (2023). Exploiting Network Compressibility and Topology in Zero-Cost NAS. Proceedings of the Second International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 224:18/1-14. Available from https://proceedings.mlr.press/v224/xiang23a.html.
