DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving

Foteini Strati, Sara Mcallister, Amar Phanishayee, Jakub Tarnawski, Ana Klimovic
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:46745-46771, 2024.

Abstract

Distributed LLM serving is costly and often underutilizes hardware accelerators due to three key challenges: bubbles in pipeline-parallel deployments caused by the bimodal latency of prompt and token processing, GPU memory overprovisioning, and long recovery times in case of failures. DéjàVu addresses all these challenges using a versatile and efficient KV cache streaming library (DéjàVuLib). Using DéjàVuLib, we propose and implement efficient prompt-token disaggregation to reduce pipeline bubbles, microbatch swapping for efficient GPU memory management, and state replication for fault-tolerance. We highlight the efficacy of these solutions on a range of large models across cloud deployments.
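To make the abstract's core idea concrete, below is a minimal, illustrative sketch in plain Python of KV-cache streaming with state replication: as decoding produces each token's key/value entries, they are streamed to a replica, so a restarted worker can restore its cache and resume generation instead of recomputing the prompt. This is not the paper's DéjàVuLib API; all names here (KVCacheReplica, Worker, stream, restore_from) are hypothetical stand-ins for the real library's asynchronous GPU-to-host or peer-to-peer copies.

from dataclasses import dataclass, field

@dataclass
class KVCacheReplica:
    """Stands in for a remote replica (e.g., host memory or a peer node)."""
    entries: dict = field(default_factory=dict)  # (layer, pos) -> (k, v)

    def stream(self, layer: int, pos: int, k, v):
        # In a real system this would be an asynchronous copy overlapped
        # with compute; here it is a plain dictionary write.
        self.entries[(layer, pos)] = (k, v)

class Worker:
    def __init__(self, num_layers: int, replica: KVCacheReplica):
        self.num_layers = num_layers
        self.replica = replica
        self.kv_cache = {}  # local "GPU" cache: (layer, pos) -> (k, v)

    def decode_step(self, pos: int):
        for layer in range(self.num_layers):
            k, v = (f"k{layer}@{pos}", f"v{layer}@{pos}")  # fake KV tensors
            self.kv_cache[(layer, pos)] = (k, v)
            self.replica.stream(layer, pos, k, v)  # stream as produced

    def restore_from(self, replica: KVCacheReplica):
        # Recovery path: reload streamed state instead of recomputing it.
        self.kv_cache = dict(replica.entries)

replica = KVCacheReplica()
w1 = Worker(num_layers=2, replica=replica)
for pos in range(3):  # generate 3 tokens
    w1.decode_step(pos)

# Simulate a failure: a fresh worker restores the cache from the replica
# and continues from token 3 without redoing tokens 0-2.
w2 = Worker(num_layers=2, replica=replica)
w2.restore_from(replica)
w2.decode_step(3)
assert (0, 0) in w2.kv_cache and (1, 3) in w2.kv_cache

The same streaming primitive plausibly underlies the microbatch swapping the abstract describes: KV entries for microbatches not currently executing can reside outside GPU memory and be fetched back just in time, so GPU memory need not be provisioned for every microbatch at once.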

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-strati24a,
  title     = {D{\'e}j{\`a}Vu: {KV}-cache Streaming for Fast, Fault-tolerant Generative {LLM} Serving},
  author    = {Strati, Foteini and Mcallister, Sara and Phanishayee, Amar and Tarnawski, Jakub and Klimovic, Ana},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {46745--46771},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/strati24a/strati24a.pdf},
  url       = {https://proceedings.mlr.press/v235/strati24a.html},
  abstract  = {Distributed LLM serving is costly and often underutilizes hardware accelerators due to three key challenges: bubbles in pipeline-parallel deployments caused by the bimodal latency of prompt and token processing, GPU memory overprovisioning, and long recovery times in case of failures. DéjàVu addresses all these challenges using a versatile and efficient KV cache streaming library (DéjàVuLib). Using DéjàVuLib, we propose and implement efficient prompt-token disaggregation to reduce pipeline bubbles, microbatch swapping for efficient GPU memory management, and state replication for fault-tolerance. We highlight the efficacy of these solutions on a range of large models across cloud deployments.}
}
Endnote
%0 Conference Paper
%T DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving
%A Foteini Strati
%A Sara Mcallister
%A Amar Phanishayee
%A Jakub Tarnawski
%A Ana Klimovic
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-strati24a
%I PMLR
%P 46745--46771
%U https://proceedings.mlr.press/v235/strati24a.html
%V 235
%X Distributed LLM serving is costly and often underutilizes hardware accelerators due to three key challenges: bubbles in pipeline-parallel deployments caused by the bimodal latency of prompt and token processing, GPU memory overprovisioning, and long recovery times in case of failures. DéjàVu addresses all these challenges using a versatile and efficient KV cache streaming library (DéjàVuLib). Using DéjàVuLib, we propose and implement efficient prompt-token disaggregation to reduce pipeline bubbles, microbatch swapping for efficient GPU memory management, and state replication for fault-tolerance. We highlight the efficacy of these solutions on a range of large models across cloud deployments.
APA
Strati, F., Mcallister, S., Phanishayee, A., Tarnawski, J. & Klimovic, A. (2024). DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:46745-46771. Available from https://proceedings.mlr.press/v235/strati24a.html.
