FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search

Jordan Dotzel, Gang Wu, Andrew Li, Muhammad Umar, Yun Ni, Mohamed S Abdelfattah, Zhiru Zhang, Liqun Cheng, Martin G Dixon, Norman P Jouppi, Quoc V Le, Sheng Li
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:6/1-26, 2024.

Abstract

Quantization has become a mainstream compression technique for reducing model size, computational requirements, and energy consumption for modern deep neural networks (DNNs). With improved numerical support in recent hardware, including multiple variants of integer and floating point, mixed-precision quantization has become necessary to achieve high-quality results with low model cost. Prior mixed-precision methods have performed either a post-training quantization search, which compromises on accuracy, or a differentiable quantization search, which leads to high memory usage from branching. Therefore, we propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models. We evaluate our search (FLIQS) on multiple convolutional and vision transformer networks to discover Pareto-optimal models. Our approach improves upon uniform precision, manual mixed-precision, and recent integer quantization search methods. With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.3% points and ResNet-50 by 0.90% points with equivalent model cost over previous methods. Additionally, for the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98% points compared to prior state-of-the-art FP8 models. Finally, we extend FLIQS to simultaneously search a joint quantization and neural architecture space and improve the ImageNet accuracy by 2.69% points with similar model cost on a MobileNetV2 search space.
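To make the setting concrete, below is a minimal sketch of per-layer mixed-precision *fake* quantization, the kind of precision assignment a search like FLIQS chooses over. It assumes simple symmetric per-tensor integer quantization; the layer names and the 8-bit/4-bit plan are hypothetical illustrations of the search space, not the paper's algorithm, and the floating-point formats the paper also searches are omitted for brevity.

```python
# Illustrative sketch only: symmetric per-tensor integer fake quantization
# with a hypothetical per-layer bitwidth plan (not the FLIQS search itself).
import numpy as np

def fake_quantize_int(x: np.ndarray, bits: int) -> np.ndarray:
    """Quantize-dequantize x onto a signed integer grid with `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for INT8, 7 for INT4
    scale = max(float(np.max(np.abs(x))), 1e-8) / qmax  # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax)  # round to the integer grid
    return q * scale                               # dequantize back to float

# Hypothetical precision plan a mixed-precision search might return:
# a sensitive early layer keeps 8 bits, a tolerant later layer drops to 4 bits.
layer_weights = {
    "conv1": np.random.randn(64, 3, 7, 7).astype(np.float32),
    "layer4.conv2": np.random.randn(512, 512, 3, 3).astype(np.float32),
}
precision_plan = {"conv1": 8, "layer4.conv2": 4}

for name, w in layer_weights.items():
    w_q = fake_quantize_int(w, precision_plan[name])
    mse = float(np.mean((w - w_q) ** 2))
    print(f"{name}: {precision_plan[name]}-bit, quantization MSE = {mse:.2e}")
```

The trade-off this sketch surfaces (lower bits cut model cost but raise per-layer error by different amounts) is exactly what makes a per-layer precision search worthwhile.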

Cite this Paper


BibTeX
@InProceedings{pmlr-v256-dotzel24a, title = {FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search}, author = {Dotzel, Jordan and Wu, Gang and Li, Andrew and Umar, Muhammad and Ni, Yun and Abdelfattah, Mohamed S and Zhang, Zhiru and Cheng, Liqun and Dixon, Martin G and Jouppi, Norman P and Le, Quoc V and Li, Sheng}, booktitle = {Proceedings of the Third International Conference on Automated Machine Learning}, pages = {6/1--26}, year = {2024}, editor = {Eggensperger, Katharina and Garnett, Roman and Vanschoren, Joaquin and Lindauer, Marius and Gardner, Jacob R.}, volume = {256}, series = {Proceedings of Machine Learning Research}, month = {09--12 Sep}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v256/main/assets/dotzel24a/dotzel24a.pdf}, url = {https://proceedings.mlr.press/v256/dotzel24a.html}, abstract = {Quantization has become a mainstream compression technique for reducing model size, computational requirements, and energy consumption for modern deep neural networks (DNNs). With improved numerical support in recent hardware, including multiple variants of integer and floating point, mixed-precision quantization has become necessary to achieve high-quality results with low model cost. Prior mixed-precision methods have performed either a post-training quantization search, which compromises on accuracy, or a differentiable quantization search, which leads to high memory usage from branching. Therefore, we propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models. We evaluate our search (FLIQS) on multiple convolutional and vision transformer networks to discover Pareto-optimal models. Our approach improves upon uniform precision, manual mixed-precision, and recent integer quantization search methods. With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.3% points and ResNet-50 by 0.90% points with equivalent model cost over previous methods. Additionally, for the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98% points compared to prior state-of-the-art FP8 models. Finally, we extend FLIQS to simultaneously search a joint quantization and neural architecture space and improve the ImageNet accuracy by 2.69% points with similar model cost on a MobileNetV2 search space.} }
Endnote
%0 Conference Paper %T FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search %A Jordan Dotzel %A Gang Wu %A Andrew Li %A Muhammad Umar %A Yun Ni %A Mohamed S Abdelfattah %A Zhiru Zhang %A Liqun Cheng %A Martin G Dixon %A Norman P Jouppi %A Quoc V Le %A Sheng Li %B Proceedings of the Third International Conference on Automated Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Katharina Eggensperger %E Roman Garnett %E Joaquin Vanschoren %E Marius Lindauer %E Jacob R. Gardner %F pmlr-v256-dotzel24a %I PMLR %P 6/1--26 %U https://proceedings.mlr.press/v256/dotzel24a.html %V 256 %X Quantization has become a mainstream compression technique for reducing model size, computational requirements, and energy consumption for modern deep neural networks (DNNs). With improved numerical support in recent hardware, including multiple variants of integer and floating point, mixed-precision quantization has become necessary to achieve high-quality results with low model cost. Prior mixed-precision methods have performed either a post-training quantization search, which compromises on accuracy, or a differentiable quantization search, which leads to high memory usage from branching. Therefore, we propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models. We evaluate our search (FLIQS) on multiple convolutional and vision transformer networks to discover Pareto-optimal models. Our approach improves upon uniform precision, manual mixed-precision, and recent integer quantization search methods. With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.3% points and ResNet-50 by 0.90% points with equivalent model cost over previous methods. Additionally, for the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98% points compared to prior state-of-the-art FP8 models. Finally, we extend FLIQS to simultaneously search a joint quantization and neural architecture space and improve the ImageNet accuracy by 2.69% points with similar model cost on a MobileNetV2 search space.
APA
Dotzel, J., Wu, G., Li, A., Umar, M., Ni, Y., Abdelfattah, M.S., Zhang, Z., Cheng, L., Dixon, M.G., Jouppi, N.P., Le, Q.V. & Li, S.. (2024). FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search. Proceedings of the Third International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 256:6/1-26 Available from https://proceedings.mlr.press/v256/dotzel24a.html.
