OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance

Yongqiang Yao, Jingru Tan, Feizhao Zhang, Jiahao Hu, Yazhe Niu, Jin Xin, Bo Li, Pengfei Liu, Ruihao Gong, Dahua Lin, Ningyi Xu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:71765-71779, 2025.

Abstract

Vision-language instruction-tuning models have recently achieved significant performance improvements. In this work, we discover that large-scale 3D parallel training of these models leads to an imbalanced computation load across devices. The vision and language parts are inherently heterogeneous: their data distributions and model architectures differ significantly, which hurts distributed training efficiency. To address this issue, we rebalance the computational load from the data, model, and memory perspectives, achieving more balanced computation across devices. Specifically, for the data, instances are grouped into new balanced mini-batches within and across devices. For the model, a search-based method is employed to achieve a more balanced partitioning. For memory, we adaptively adjust the re-computation strategy of each partition to fully utilize the available memory. These three perspectives are not independent but closely connected, forming an omniverse balanced training framework. Extensive experiments validate the effectiveness of our method. Compared with the open-source training code of InternVL-Chat, our method greatly reduces training time, achieving about a 1.8$\times$ speed-up. Its efficacy and generalizability are further validated across various models and datasets. Code will be released at https://github.com/ModelTC/OmniBal.
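
To make the data-balancing step concrete: grouping instances into compute-balanced mini-batches across devices can be viewed as a balanced-partition problem over per-instance compute costs. The sketch below is a minimal illustration of that idea using a greedy longest-processing-time heuristic; it is an assumption about how such grouping could look, not the paper's released implementation, and the cost model (token counts per instance) and the `balance_minibatches` helper are hypothetical. The same cost-balancing view carries over to the paper's search over model partitions.

```python
# Minimal sketch of compute-balanced mini-batch grouping (illustrative
# assumption, not the OmniBal implementation). Each instance carries a
# compute-cost estimate, e.g. vision patches + text tokens; a greedy
# longest-processing-time heuristic assigns instances to per-device bins
# so the total cost per device is roughly equal.
import heapq
from typing import List

def balance_minibatches(costs: List[int], num_devices: int) -> List[List[int]]:
    """Assign instance indices to num_devices bins with near-equal total cost."""
    # Min-heap of (accumulated_cost, device_id); the lightest bin is on top.
    bins = [(0, d) for d in range(num_devices)]
    heapq.heapify(bins)
    assignment: List[List[int]] = [[] for _ in range(num_devices)]
    # Place the most expensive instances first (LPT heuristic).
    for idx in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, device = heapq.heappop(bins)
        assignment[device].append(idx)
        heapq.heappush(bins, (load + costs[idx], device))
    return assignment

if __name__ == "__main__":
    # Hypothetical per-instance costs, e.g. num_image_patches + num_text_tokens.
    costs = [900, 120, 450, 300, 800, 60, 700, 250]
    print(balance_minibatches(costs, num_devices=2))
```

With heterogeneous vision-language samples, naive sequential batching can leave one device with several image-heavy instances while another gets text-only ones; cost-aware assignment like the above keeps per-device loads close, which is the property the paper exploits.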

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-yao25d,
  title     = {{O}mni{B}al: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance},
  author    = {Yao, Yongqiang and Tan, Jingru and Zhang, Feizhao and Hu, Jiahao and Niu, Yazhe and Xin, Jin and Li, Bo and Liu, Pengfei and Gong, Ruihao and Lin, Dahua and Xu, Ningyi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {71765--71779},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/yao25d/yao25d.pdf},
  url       = {https://proceedings.mlr.press/v267/yao25d.html},
  abstract  = {Vision-language instruction-tuning models have recently achieved significant performance improvements. In this work, we discover that large-scale 3D parallel training on those models leads to an imbalanced computation load across different devices. The vision and language parts are inherently heterogeneous: their data distribution and model architecture differ significantly, which affects distributed training efficiency. To address this issue, we rebalance the computational load from data, model, and memory perspectives, achieving more balanced computation across devices. Specifically, for the data, instances are grouped into new balanced mini-batches within and across devices. A search-based method is employed for the model to achieve a more balanced partitioning. For memory optimization, we adaptively adjust the re-computation strategy for each partition to utilize the available memory fully. These three perspectives are not independent but are closely connected, forming an omniverse balanced training framework. Extensive experiments are conducted to validate the effectiveness of our method. Compared with the open-source training code of InternVL-Chat, training time is reduced greatly, achieving about 1.8$\times$ speed-up. Our method’s efficacy and generalizability are further validated across various models and datasets. Codes will be released at https://github.com/ModelTC/OmniBal.}
}
Endnote
%0 Conference Paper
%T OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance
%A Yongqiang Yao
%A Jingru Tan
%A Feizhao Zhang
%A Jiahao Hu
%A Yazhe Niu
%A Jin Xin
%A Bo Li
%A Pengfei Liu
%A Ruihao Gong
%A Dahua Lin
%A Ningyi Xu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-yao25d
%I PMLR
%P 71765--71779
%U https://proceedings.mlr.press/v267/yao25d.html
%V 267
%X Vision-language instruction-tuning models have recently achieved significant performance improvements. In this work, we discover that large-scale 3D parallel training on those models leads to an imbalanced computation load across different devices. The vision and language parts are inherently heterogeneous: their data distribution and model architecture differ significantly, which affects distributed training efficiency. To address this issue, we rebalance the computational load from data, model, and memory perspectives, achieving more balanced computation across devices. Specifically, for the data, instances are grouped into new balanced mini-batches within and across devices. A search-based method is employed for the model to achieve a more balanced partitioning. For memory optimization, we adaptively adjust the re-computation strategy for each partition to utilize the available memory fully. These three perspectives are not independent but are closely connected, forming an omniverse balanced training framework. Extensive experiments are conducted to validate the effectiveness of our method. Compared with the open-source training code of InternVL-Chat, training time is reduced greatly, achieving about 1.8$\times$ speed-up. Our method’s efficacy and generalizability are further validated across various models and datasets. Codes will be released at https://github.com/ModelTC/OmniBal.
APA
Yao, Y., Tan, J., Zhang, F., Hu, J., Niu, Y., Xin, J., Li, B., Liu, P., Gong, R., Lin, D. & Xu, N. (2025). OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:71765-71779. Available from https://proceedings.mlr.press/v267/yao25d.html.