Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Akhiad Bercovich, Tomer Ronen, Talor Abramovich, Nir Ailon, Nave Assaf, Mohammed Dabbah, Ido Galil, Amnon Geifman, Yonatan Geifman, Izhak Golan, Netanel Haber, Ehud Dov Karpas, Roi Koren, Itay Levy, Pavlo Molchanov, Shahar Mor, Zach Moshe, Najeeb Nabwani, Omri Puny, Ran Rubin, Itamar Schen, Ido Shahaf, Oren Tropp, Omer Ullman Argov, Ran Zilberstein, Ran El-Yaniv
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:3806-3830, 2025.

Abstract

Large language models (LLMs) offer remarkable capabilities, yet their high inference costs restrict wider adoption. While increasing parameter counts improves accuracy, it also broadens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a hardware-aware framework that accelerates the inference of LLMs while preserving their capabilities. Using neural architecture search (NAS) at a large scale, Puzzle optimizes models with tens of billions of parameters. Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We showcase our framework’s impact via Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B) and Llama-3.3-Nemotron-49B, two publicly available models derived from Llama-70B-Instruct. Both models achieve a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while retaining 98.4% of the original model’s benchmark accuracies. These are the most accurate models supporting single H100 GPU inference with large batch sizes, despite training on 45B tokens at most, far fewer than the 15T used to train Llama-70B. Lastly, we show that lightweight alignment on these derived models allows them to surpass the parent model in specific capabilities. Our work establishes that powerful LLMs can be optimized for efficient deployment with only negligible loss in quality, underscoring that inference performance, not parameter count alone, should guide model selection.
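The abstract names two algorithmic ingredients: blockwise local knowledge distillation (BLD), which trains cheaper replacement blocks to mimic each parent block in isolation, and a mixed-integer program that assembles the final model by choosing one variant per block under hardware constraints. The sketch below illustrates only the selection step, using the open-source PuLP library as a stand-in solver; the block-variant names, quality scores, memory costs, and budget are illustrative assumptions, not values from the paper.

    # Minimal sketch of MIP-based block selection, assuming per-variant quality and
    # memory estimates are already available from a BLD-style training stage.
    # All names and numbers below are hypothetical.
    import pulp

    # variants[b] = list of (name, quality_loss_vs_parent, memory_gb) for block b.
    variants = {
        0: [("parent_block", 0.00, 2.0), ("no_attention", 0.08, 1.2), ("slim_ffn", 0.03, 1.5)],
        1: [("parent_block", 0.00, 2.0), ("no_attention", 0.02, 1.2), ("slim_ffn", 0.05, 1.5)],
        2: [("parent_block", 0.00, 2.0), ("no_attention", 0.10, 1.2), ("slim_ffn", 0.01, 1.5)],
    }
    memory_budget_gb = 4.5  # hypothetical share of a single GPU's memory for these blocks

    prob = pulp.LpProblem("puzzle_block_selection", pulp.LpMinimize)

    # One binary variable per (block, variant): x[b][v] == 1 means variant v is chosen for block b.
    x = {
        b: [pulp.LpVariable(f"x_{b}_{v}", cat="Binary") for v in range(len(opts))]
        for b, opts in variants.items()
    }

    # Objective: minimize the summed estimated quality degradation across blocks.
    prob += pulp.lpSum(
        x[b][v] * variants[b][v][1] for b in variants for v in range(len(variants[b]))
    )

    # Exactly one variant must be chosen for every block.
    for b in variants:
        prob += pulp.lpSum(x[b]) == 1

    # Hardware constraint: total memory of the chosen variants must fit the budget.
    prob += pulp.lpSum(
        x[b][v] * variants[b][v][2] for b in variants for v in range(len(variants[b]))
    ) <= memory_budget_gb

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for b in variants:
        chosen = next(v for v in range(len(variants[b])) if x[b][v].value() > 0.5)
        print(f"block {b}: {variants[b][chosen][0]}")

In the actual framework, the per-variant scores would come from measuring each BLD-trained replacement against its parent block, and the constraints from profiling memory and throughput on the target GPU; this toy instance only shows the shape of the optimization, not the paper's exact formulation.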

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-bercovich25a,
  title = {Puzzle: Distillation-Based {NAS} for Inference-Optimized {LLM}s},
  author = {Bercovich, Akhiad and Ronen, Tomer and Abramovich, Talor and Ailon, Nir and Assaf, Nave and Dabbah, Mohammed and Galil, Ido and Geifman, Amnon and Geifman, Yonatan and Golan, Izhak and Haber, Netanel and Karpas, Ehud Dov and Koren, Roi and Levy, Itay and Molchanov, Pavlo and Mor, Shahar and Moshe, Zach and Nabwani, Najeeb and Puny, Omri and Rubin, Ran and Schen, Itamar and Shahaf, Ido and Tropp, Oren and Argov, Omer Ullman and Zilberstein, Ran and El-Yaniv, Ran},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {3806--3830},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/bercovich25a/bercovich25a.pdf},
  url = {https://proceedings.mlr.press/v267/bercovich25a.html}
}
Endnote
%0 Conference Paper
%T Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
%A Akhiad Bercovich
%A Tomer Ronen
%A Talor Abramovich
%A Nir Ailon
%A Nave Assaf
%A Mohammed Dabbah
%A Ido Galil
%A Amnon Geifman
%A Yonatan Geifman
%A Izhak Golan
%A Netanel Haber
%A Ehud Dov Karpas
%A Roi Koren
%A Itay Levy
%A Pavlo Molchanov
%A Shahar Mor
%A Zach Moshe
%A Najeeb Nabwani
%A Omri Puny
%A Ran Rubin
%A Itamar Schen
%A Ido Shahaf
%A Oren Tropp
%A Omer Ullman Argov
%A Ran Zilberstein
%A Ran El-Yaniv
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-bercovich25a
%I PMLR
%P 3806--3830
%U https://proceedings.mlr.press/v267/bercovich25a.html
%V 267
APA
Bercovich, A., Ronen, T., Abramovich, T., Ailon, N., Assaf, N., Dabbah, M., Galil, I., Geifman, A., Geifman, Y., Golan, I., Haber, N., Karpas, E.D., Koren, R., Levy, I., Molchanov, P., Mor, S., Moshe, Z., Nabwani, N., Puny, O., Rubin, R., Schen, I., Shahaf, I., Tropp, O., Argov, O.U., Zilberstein, R. & El-Yaniv, R. (2025). Puzzle: Distillation-Based NAS for Inference-Optimized LLMs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:3806-3830. Available from https://proceedings.mlr.press/v267/bercovich25a.html.
