LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs

Erik Schultheis, Dan Alistarh
Conference on Parsimony and Learning, PMLR 328:265-284, 2026.

Abstract

We present LLMQ, an end-to-end CUDA/C++ implementation for medium-sized language-model training, e.g. 3B to 32B parameters, on affordable, commodity GPUs. These devices are characterized by low memory availability and slow communication compared to datacentre-grade GPUs. Consequently, we showcase a range of optimizations that target these bottlenecks, including activation checkpointing, offloading, and copy-engine based collectives. LLMQ is able to train or fine-tune a 7B model on a single 16GB mid-range gaming card, or a 32B model on a workstation equipped with 4 RTX 4090s. This is achieved while executing a standard 8-bit training pipeline, without additional algorithmic approximations, and maintaining FLOP utilization of around 50%. The efficiency of LLMQ rivals that of production-scale systems on much more expensive cloud-grade GPUs.
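To make the offloading idea mentioned in the abstract concrete, the following CUDA/C++ sketch (not taken from the paper's code; all names, sizes, and the dummy kernel are illustrative assumptions) stages an activation buffer to pinned host memory on a dedicated stream, so the device-to-host transfer runs on the copy engine concurrently with later compute:

// Hypothetical sketch: overlapping activation offload with compute using a
// dedicated copy stream, in the spirit of the offloading described above.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dummy_forward(float* act, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) act[i] = act[i] * 2.0f + 1.0f;   // stand-in for a layer's compute
}

int main() {
    const size_t n = 1 << 24;                   // ~16M floats (~64 MB) of activations
    float *d_act, *h_act;
    cudaMalloc(&d_act, n * sizeof(float));
    cudaMallocHost(&h_act, n * sizeof(float));  // pinned host buffer, required for async copies

    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);
    cudaEvent_t done;
    cudaEventCreate(&done);

    // Launch compute on its own stream, then hand the result to the copy engine.
    dummy_forward<<<(n + 255) / 256, 256, 0, compute>>>(d_act, n);
    cudaEventRecord(done, compute);
    cudaStreamWaitEvent(copy, done, 0);         // copy starts only after the kernel finishes
    cudaMemcpyAsync(h_act, d_act, n * sizeof(float),
                    cudaMemcpyDeviceToHost, copy);

    // Later layers could keep launching kernels on `compute` here, overlapping
    // with the device-to-host transfer running on the copy engine.

    cudaStreamSynchronize(copy);
    printf("offloaded %zu floats, first element = %f\n", n, (double)h_act[0]);

    cudaEventDestroy(done);
    cudaStreamDestroy(copy);
    cudaStreamDestroy(compute);
    cudaFreeHost(h_act);
    cudaFree(d_act);
    return 0;
}

The key design point is that transfers issued on a separate stream from pinned memory are executed by the GPU's copy engines, so they do not occupy the compute units; whether LLMQ structures its offloading exactly this way is not stated in the abstract.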

Cite this Paper


BibTeX
@InProceedings{pmlr-v328-schultheis26a,
  title     = {LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs},
  author    = {Schultheis, Erik and Alistarh, Dan},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {265--284},
  year      = {2026},
  editor    = {Burkholz, Rebekka and Liu, Shiwei and Ravishankar, Saiprasad and Redman, William and Huang, Wei and Su, Weijie and Zhu, Zhihui},
  volume    = {328},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v328/main/assets/schultheis26a/schultheis26a.pdf},
  url       = {https://proceedings.mlr.press/v328/schultheis26a.html},
  abstract  = {We present LLMQ, an end-to-end CUDA/C++ implementation for medium-sized language-model training, e.g. 3B to 32B parameters, on affordable, commodity GPUs. These devices are characterized by low memory availability and slow communication compared to datacentre-grade GPUs. Consequently, we showcase a range of optimizations that target these bottlenecks, including activation checkpointing, offloading, and copy-engine based collectives. LLMQ is able to train or fine-tune a 7B model on a single 16GB mid-range gaming card, or a 32B model on a workstation equipped with 4 RTX 4090s. This is achieved while executing a standard 8-bit training pipeline, without additional algorithmic approximations, and maintaining FLOP utilization of around 50%. The efficiency of LLMQ rivals that of production-scale systems on much more expensive cloud-grade GPUs.}
}
Endnote
%0 Conference Paper
%T LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs
%A Erik Schultheis
%A Dan Alistarh
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Rebekka Burkholz
%E Shiwei Liu
%E Saiprasad Ravishankar
%E William Redman
%E Wei Huang
%E Weijie Su
%E Zhihui Zhu
%F pmlr-v328-schultheis26a
%I PMLR
%P 265--284
%U https://proceedings.mlr.press/v328/schultheis26a.html
%V 328
%X We present LLMQ, an end-to-end CUDA/C++ implementation for medium-sized language-model training, e.g. 3B to 32B parameters, on affordable, commodity GPUs. These devices are characterized by low memory availability and slow communication compared to datacentre-grade GPUs. Consequently, we showcase a range of optimizations that target these bottlenecks, including activation checkpointing, offloading, and copy-engine based collectives. LLMQ is able to train or fine-tune a 7B model on a single 16GB mid-range gaming card, or a 32B model on a workstation equipped with 4 RTX 4090s. This is achieved while executing a standard 8-bit training pipeline, without additional algorithmic approximations, and maintaining FLOP utilization of around 50%. The efficiency of LLMQ rivals that of production-scale systems on much more expensive cloud-grade GPUs.
APA
Schultheis, E. & Alistarh, D. (2026). LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 328:265-284. Available from https://proceedings.mlr.press/v328/schultheis26a.html.