BINAS: Bilinear Interpretable Neural Architecture Search

Niv Nayman, Yonathan Aflalo, Asaf Noy, Lihi Zelnik-Manor
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:786-801, 2023.

Abstract

Realistic use of neural networks often requires adhering to multiple constraints, on latency, energy, and memory, among others. A popular approach to finding networks that fit these constraints is constrained Neural Architecture Search (NAS). However, previous methods rely on complicated predictors of a network's accuracy. These predictors are hard to interpret and sensitive to many hyperparameters that must be tuned, and therefore the accuracy of the generated models often suffers. In this work we resolve this by introducing Bilinear Interpretable Neural Architecture Search (BINAS), which is based on an accurate and simple bilinear formulation of both an accuracy estimator and the expected resource requirement, together with a scalable search method with theoretical guarantees. The simplicity of the proposed estimator, together with the intuitive way it is constructed, brings interpretability through many insights into the contribution of different design choices. For example, we find that in the examined search space, adding depth and width is more effective at deeper stages of the network and at the beginning of each resolution stage. Our experiments show that BINAS generates architectures comparable to or better than those of other state-of-the-art NAS methods, at a reduced search cost for each additional generated network, while strictly satisfying the resource constraints.
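To make the bilinear formulation concrete, the following is a minimal sketch in notation of our own choosing, not necessarily the paper's: with binary variables $\alpha_{s,b,o}$ selecting operation $o$ for block $b$ of stage $s$, and $\beta_{s,b}$ indicating that stage $s$ runs to a depth of at least $b$ blocks, a bilinear accuracy estimator can be written as

\[
\widehat{\mathrm{ACC}}(\alpha,\beta) \;=\; c \;+\; \sum_{s,b,o} \beta_{s,b}\,\alpha_{s,b,o}\,\Delta_{s,b,o},
\]

where $\Delta_{s,b,o}$ is an estimated accuracy contribution of that design choice and $c$ is a base accuracy. An expected resource measure such as latency admits the same bilinear form with per-choice costs $t_{s,b,o}$, so the constrained search reduces to

\[
\max_{\alpha,\beta}\ \widehat{\mathrm{ACC}}(\alpha,\beta)
\quad\text{s.t.}\quad
\sum_{s,b,o} \beta_{s,b}\,\alpha_{s,b,o}\,t_{s,b,o} \;\le\; T,
\]

i.e., a bilinear objective maximized under a bilinear budget constraint $T$. Because both sides share this simple structure, each coefficient $\Delta_{s,b,o}$ directly exposes the marginal value of a design choice, which is where the interpretability claims come from.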

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-nayman23a,
  title     = {{BINAS}: Bilinear Interpretable Neural Architecture Search},
  author    = {Nayman, Niv and Aflalo, Yonathan and Noy, Asaf and Zelnik-Manor, Lihi},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {786--801},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/nayman23a/nayman23a.pdf},
  url       = {https://proceedings.mlr.press/v189/nayman23a.html}
}
Endnote
%0 Conference Paper
%T BINAS: Bilinear Interpretable Neural Architecture Search
%A Niv Nayman
%A Yonathan Aflalo
%A Asaf Noy
%A Lihi Zelnik-Manor
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-nayman23a
%I PMLR
%P 786--801
%U https://proceedings.mlr.press/v189/nayman23a.html
%V 189
APA
Nayman, N., Aflalo, Y., Noy, A. & Zelnik-Manor, L. (2023). BINAS: Bilinear Interpretable Neural Architecture Search. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:786-801. Available from https://proceedings.mlr.press/v189/nayman23a.html.
