R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts

Zhongyang Li, Ziyue Li, Tianyi Zhou
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:35292-35316, 2025.

Abstract

In large multimodal models (LMMs), the perception of non-language modalities (e.g., visual representations) is usually not on par with the large language models (LLMs)’ powerful reasoning capabilities, which limits LMMs’ performance on challenging downstream tasks. This weakness has recently been mitigated by replacing the vision encoder with a mixture-of-experts (MoE), which provides the rich, multi-granularity, and diverse representations required by different downstream tasks. The performance of a multimodal MoE largely depends on its router, which reweights and mixes the representations of different experts for each input. However, we find that the end-to-end trained router does not always produce the optimal routing weights for every test sample. To bridge the gap, we propose a novel and efficient method, "Re-Routing in Test-Time (R2-T2)", which locally optimizes the vector of routing weights at test time by moving it toward the routing-weight vectors of correctly predicted samples in a neighborhood of the test sample. We propose three R2-T2 strategies with different optimization objectives and neighbor-search spaces. R2-T2 consistently and significantly improves state-of-the-art LMMs’ performance on challenging multimodal benchmarks of diverse tasks, without training any parameters in the base model. Our code can be accessed here.
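To make the mechanism concrete: the method keeps a reference set of samples the model already predicts correctly, finds the test sample’s nearest neighbors in that set, and pulls the router’s weight vector toward those neighbors’ routing-weight vectors. Below is a minimal NumPy sketch of one plausible instantiation, a kernel-weighted average over the k nearest correct neighbors. The function name, the Gaussian kernel, and the single averaging step are illustrative assumptions, not the paper’s exact algorithm; the paper proposes three strategies with different optimization objectives and neighbor-search spaces.

import numpy as np

def rerank_routing_weights(x, ref_embeds, ref_weights, k=5, bandwidth=1.0):
    """Hypothetical sketch of test-time re-routing via kernel regression.

    x           : (d,) embedding of the test sample.
    ref_embeds  : (n, d) embeddings of correctly predicted reference samples.
    ref_weights : (n, m) routing-weight vectors (m experts) for those samples.
    Returns an (m,) re-routed weight vector that sums to 1.
    """
    # Distance from the test sample to every reference sample.
    dists = np.linalg.norm(ref_embeds - x, axis=1)
    # Keep only the k nearest correctly predicted neighbors.
    idx = np.argsort(dists)[:k]
    # Gaussian kernel weights: closer neighbors pull harder.
    kw = np.exp(-(dists[idx] ** 2) / (2.0 * bandwidth ** 2))
    kw /= kw.sum()
    # Move the routing vector toward the neighbors' vectors
    # (here, all the way: a kernel-weighted average).
    w = kw @ ref_weights[idx]
    # Renormalize so the expert weights lie on the probability simplex.
    return w / w.sum()

Note that such an update only mixes the routing weights of samples the model already gets right; it never touches the experts or the base model’s parameters, which is what makes the re-routing train-free.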

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-li25bc,
  title     = {R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts},
  author    = {Li, Zhongyang and Li, Ziyue and Zhou, Tianyi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {35292--35316},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25bc/li25bc.pdf},
  url       = {https://proceedings.mlr.press/v267/li25bc.html},
  abstract  = {In large multimodal models (LMMs), the perception of non-language modalities (e.g., visual representations) is usually not on par with the large language models (LLMs)' powerful reasoning capabilities, deterring LMMs' performance on challenging downstream tasks. This weakness has been recently mitigated by replacing the vision encoder with a mixture-of-experts (MoE), which provides rich, multi-granularity, and diverse representations required by different downstream tasks. The performance of multimodal MoE largely depends on its router, which reweights and mixes the representations of different experts for each input. However, we find that the end-to-end trained router does not always produce the optimal routing weights for every test sample. To bridge the gap, we propose a novel and efficient method "Re-Routing in Test-Time (R2-T2)" that locally optimizes the vector of routing weights in test-time by moving it toward those vectors of the correctly predicted samples in a neighborhood of the test sample. We propose three R2-T2 strategies with different optimization objectives and neighbor-search spaces. R2-T2 consistently and significantly improves state-of-the-art LMMs' performance on challenging multimodal benchmarks of diverse tasks, without training any parameters in the base model. Our code can be accessed here.}
}
Endnote
%0 Conference Paper
%T R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts
%A Zhongyang Li
%A Ziyue Li
%A Tianyi Zhou
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25bc
%I PMLR
%P 35292--35316
%U https://proceedings.mlr.press/v267/li25bc.html
%V 267
%X In large multimodal models (LMMs), the perception of non-language modalities (e.g., visual representations) is usually not on par with the large language models (LLMs)' powerful reasoning capabilities, deterring LMMs' performance on challenging downstream tasks. This weakness has been recently mitigated by replacing the vision encoder with a mixture-of-experts (MoE), which provides rich, multi-granularity, and diverse representations required by different downstream tasks. The performance of multimodal MoE largely depends on its router, which reweights and mixes the representations of different experts for each input. However, we find that the end-to-end trained router does not always produce the optimal routing weights for every test sample. To bridge the gap, we propose a novel and efficient method "Re-Routing in Test-Time (R2-T2)" that locally optimizes the vector of routing weights in test-time by moving it toward those vectors of the correctly predicted samples in a neighborhood of the test sample. We propose three R2-T2 strategies with different optimization objectives and neighbor-search spaces. R2-T2 consistently and significantly improves state-of-the-art LMMs' performance on challenging multimodal benchmarks of diverse tasks, without training any parameters in the base model. Our code can be accessed here.
APA
Li, Z., Li, Z. & Zhou, T. (2025). R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:35292-35316. Available from https://proceedings.mlr.press/v267/li25bc.html.