Efficiently Serving Large Multimodal Models Using EPD Disaggregation

Gursimran Singh, Xinglu Wang, Yifan Hu, Timothy Tin Long Yu, Linzi Xing, Wei Jiang, Zhefeng Wang, Bai Xiaolong, Yi Li, Ying Xiong, Yong Zhang, Zhenan Fan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:55740-55756, 2025.

Abstract

Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by handling diverse inputs such as images, audio, and video, but at the cost of adding a multimodal encoding stage that increases both computational and memory overhead. This step negatively affects key Service Level Objectives (SLOs), such as time to first token (TTFT) and time per output token (TPOT). We introduce Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates the encoding, prefill, and decode stages onto dedicated resources. Unlike current systems, which bundle encoding and prefill together, our approach decouples these steps, unlocking new opportunities and optimizations. These include a mechanism to cache multimedia tokens for efficient transfer, a novel way to parallelize the encoding load within a request, a module for optimal resource allocation for disaggregated serving, and a novel role-switching method to handle changing workload characteristics. Experimental evaluations with popular LMMs show substantial gains in memory efficiency (up to 15$\times$ lower peak memory utilization), batch sizes (up to 22$\times$ larger), 10$\times$ more images per request, and 2.2$\times$ larger KV caches. Furthermore, it leads to significant improvements in SLO attainment (up to 90–100% improvement) and TTFT (up to 71% reduction), compared to systems that do not disaggregate. The code is available at https://github.com/vbdi/epdserve.
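To make the stage separation concrete, below is a minimal, self-contained Python sketch of the idea behind EPD disaggregation: a request passes through separate encode, prefill, and decode workers, with the encoder's cached multimodal tokens handed off to prefill and the prefill KV cache handed off to decode. This is an illustrative mock-up under stated assumptions, not the authors' implementation; all class, field, and function names (Request, EncodeWorker, PrefillWorker, DecodeWorker, serve) are hypothetical and do not come from the epdserve codebase.

```python
# Conceptual sketch only: routes a multimodal request through three separate
# stages, mimicking how an EPD deployment would place encode, prefill, and
# decode on dedicated resources. Names are hypothetical, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class Request:
    prompt: str
    images: list                                        # raw multimodal inputs
    image_tokens: list = field(default_factory=list)    # filled by the encode stage
    kv_cache: dict = field(default_factory=dict)        # filled by the prefill stage
    output: str = ""


class EncodeWorker:
    """Runs the multimodal encoder; its output tokens are cached for transfer."""
    def run(self, req: Request) -> Request:
        req.image_tokens = [f"<emb:{i}>" for i, _ in enumerate(req.images)]
        return req


class PrefillWorker:
    """Consumes cached multimodal tokens plus text and builds the KV cache."""
    def run(self, req: Request) -> Request:
        req.kv_cache = {"len": len(req.image_tokens) + len(req.prompt.split())}
        return req


class DecodeWorker:
    """Generates output tokens autoregressively from the transferred KV cache."""
    def run(self, req: Request, max_tokens: int = 8) -> Request:
        req.output = " ".join("tok" for _ in range(max_tokens))
        return req


def serve(req: Request) -> str:
    # In a real disaggregated deployment each stage would run on its own pool
    # of devices and hand off cached tokens / KV state over the network; here
    # the handoff is a plain function call to keep the example self-contained.
    for stage in (EncodeWorker(), PrefillWorker(), DecodeWorker()):
        req = stage.run(req)
    return req.output


if __name__ == "__main__":
    print(serve(Request(prompt="describe the scene", images=["img0.png", "img1.png"])))
```

In this toy pipeline the three workers could be scaled or re-assigned independently, which is the property the paper's resource-allocation and role-switching components exploit; the sketch deliberately omits batching, scheduling, and network transfer.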

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-singh25d,
  title     = {Efficiently Serving Large Multimodal Models Using {EPD} Disaggregation},
  author    = {Singh, Gursimran and Wang, Xinglu and Hu, Yifan and Yu, Timothy Tin Long and Xing, Linzi and Jiang, Wei and Wang, Zhefeng and Xiaolong, Bai and Li, Yi and Xiong, Ying and Zhang, Yong and Fan, Zhenan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {55740--55756},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/singh25d/singh25d.pdf},
  url       = {https://proceedings.mlr.press/v267/singh25d.html}
}
Endnote
%0 Conference Paper
%T Efficiently Serving Large Multimodal Models Using EPD Disaggregation
%A Gursimran Singh
%A Xinglu Wang
%A Yifan Hu
%A Timothy Tin Long Yu
%A Linzi Xing
%A Wei Jiang
%A Zhefeng Wang
%A Bai Xiaolong
%A Yi Li
%A Ying Xiong
%A Yong Zhang
%A Zhenan Fan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-singh25d
%I PMLR
%P 55740--55756
%U https://proceedings.mlr.press/v267/singh25d.html
%V 267
APA
Singh, G., Wang, X., Hu, Y., Yu, T.T.L., Xing, L., Jiang, W., Wang, Z., Xiaolong, B., Li, Y., Xiong, Y., Zhang, Y. & Fan, Z. (2025). Efficiently Serving Large Multimodal Models Using EPD Disaggregation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:55740-55756. Available from https://proceedings.mlr.press/v267/singh25d.html.
