m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning with Large Language Models

Xiaoke Huang, Juncheng Wu, Hui Liu, Xianfeng Tang, Yuyin Zhou
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:369-383, 2026.

Abstract

Test-time scaling has emerged as a powerful technique for enhancing the reasoning capabilities of large language models (LLMs). However, its effectiveness in medical reasoning remains uncertain, as the medical domain fundamentally differs from mathematical tasks in terms of knowledge representation and decision-making processes. In this paper, we provide the first comprehensive investigation of test-time scaling for medical reasoning and present m1, a simple yet effective approach that increases a model’s medical reasoning capability at inference. Our evaluation across diverse medical tasks demonstrates that test-time scaling (by increasing the “thinking” token budget) consistently enhances medical reasoning, enabling lightweight fine-tuned models under 10B parameters to establish new state-of-the-art performance, while our 32B model achieves results comparable to previous 70B-scale medical LLMs. However, we identify an optimal reasoning token budget of approximately 4K tokens, beyond which performance may degrade due to overthinking. Budget forcing, which extends test-time computation through iterative prompts (e.g., appending “Wait”), helps models double-check answers but does not necessarily improve overall medical QA performance and, in some cases, even introduces errors into previously correct responses. Our case-by-case analysis further identifies insufficient medical knowledge as a key bottleneck that prevents further performance gains through test-time scaling. To overcome this constraint, we find that increasing data scale, improving data quality, and expanding model capacity consistently enhance medical knowledge grounding, enabling continued performance improvements, particularly on challenging medical benchmarks where smaller models reach saturation. These findings underscore fundamental differences between medical and mathematical reasoning in LLMs, highlighting that enriched medical knowledge, rather than increased reasoning depth alone, is essential for fully realizing the benefits of test-time scaling.
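The budget-forcing strategy described above can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: `generate` is a hypothetical stand-in for a real LLM decoding call, and `token_budget`/`step_tokens` are illustrative parameters.

```python
# Illustrative sketch of budget forcing: if the model stops "thinking"
# before the token budget is spent, append "Wait," and let it continue,
# nudging it to double-check its answer.

def generate(prompt, max_new_tokens):
    """Hypothetical stand-in for an LLM decoding call.

    Returns (new_text, stopped_early); a real implementation would invoke
    the model and report whether it emitted an end-of-thinking marker.
    """
    return " ...one reasoning step...", True

def budget_forced_reasoning(question, token_budget=4096, step_tokens=512):
    """Extend test-time computation until roughly token_budget is spent."""
    trace = question
    used = 0
    while used < token_budget:
        new_text, stopped_early = generate(trace, max_new_tokens=step_tokens)
        trace += new_text
        used += step_tokens  # crude accounting; real code would count tokens
        if stopped_early and used < token_budget:
            trace += " Wait,"  # budget forcing: prompt the model to re-check
        else:
            break
    return trace

out = budget_forced_reasoning("Q: ...", token_budget=1024, step_tokens=512)
```

Consistent with the paper’s finding, one would cap `token_budget` near 4K tokens, since larger budgets were observed to degrade performance through overthinking.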

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-huang26a,
  title     = {m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning with Large Language Models},
  author    = {Huang, Xiaoke and Wu, Juncheng and Liu, Hui and Tang, Xianfeng and Zhou, Yuyin},
  booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium},
  pages     = {369--383},
  year      = {2026},
  editor    = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush},
  volume    = {297},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/huang26a/huang26a.pdf},
  url       = {https://proceedings.mlr.press/v297/huang26a.html},
  abstract  = {Test-time scaling has emerged as a powerful technique for enhancing the reasoning capabilities of large language models ({LLM}s). However, its effectiveness in medical reasoning remains uncertain, as the medical domain fundamentally differs from mathematical tasks in terms of knowledge representation and decision-making processes. In this paper, we provide the first comprehensive investigation of test-time scaling for medical reasoning and present m1, a simple yet effective approach that increases a model’s medical reasoning capability at inference. Our evaluation across diverse medical tasks demonstrates that test-time scaling (by increasing the “thinking” token budget) consistently enhances medical reasoning, enabling lightweight fine-tuned models under 10B parameters to establish new state-of-the-art performance, while our 32B model achieves results comparable to previous 70B-scale medical {LLM}s. However, we identify an optimal reasoning token budget of approximately 4K tokens, beyond which performance may degrade due to overthinking. Budget forcing, which extends test-time computation through iterative prompts (e.g., appending “Wait”), helps models double-check answers but does not necessarily improve overall medical {QA} performance and, in some cases, even introduces errors into previously correct responses. Our case-by-case analysis further identifies insufficient medical knowledge as a key bottleneck that prevents further performance gains through test-time scaling. To overcome this constraint, we find that increasing data scale, improving data quality, and expanding model capacity consistently enhance medical knowledge grounding, enabling continued performance improvements, particularly on challenging medical benchmarks where smaller models reach saturation. These findings underscore fundamental differences between medical and mathematical reasoning in {LLM}s, highlighting that enriched medical knowledge, rather than increased reasoning depth alone, is essential for fully realizing the benefits of test-time scaling.}
}
Endnote
%0 Conference Paper
%T m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning with Large Language Models
%A Xiaoke Huang
%A Juncheng Wu
%A Hui Liu
%A Xianfeng Tang
%A Yuyin Zhou
%B Proceedings of the Fifth Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2026
%E Peniel Argaw
%E Haoran Zhang
%E Sarah Jabbour
%E Payal Chandak
%E Jerry Ji
%E Sumit Mukherjee
%E Olawale Salaudeen
%E Trenton Chang
%E Elizabeth Healey
%E Fabian Gröger
%E Amin Adibi
%E Stefan Hegselmann
%E Benjamin Wild
%E Ayush Noori
%F pmlr-v297-huang26a
%I PMLR
%P 369--383
%U https://proceedings.mlr.press/v297/huang26a.html
%V 297
%X Test-time scaling has emerged as a powerful technique for enhancing the reasoning capabilities of large language models (LLMs). However, its effectiveness in medical reasoning remains uncertain, as the medical domain fundamentally differs from mathematical tasks in terms of knowledge representation and decision-making processes. In this paper, we provide the first comprehensive investigation of test-time scaling for medical reasoning and present m1, a simple yet effective approach that increases a model’s medical reasoning capability at inference. Our evaluation across diverse medical tasks demonstrates that test-time scaling (by increasing the “thinking” token budget) consistently enhances medical reasoning, enabling lightweight fine-tuned models under 10B parameters to establish new state-of-the-art performance, while our 32B model achieves results comparable to previous 70B-scale medical LLMs. However, we identify an optimal reasoning token budget of approximately 4K tokens, beyond which performance may degrade due to overthinking. Budget forcing, which extends test-time computation through iterative prompts (e.g., appending “Wait”), helps models double-check answers but does not necessarily improve overall medical QA performance and, in some cases, even introduces errors into previously correct responses. Our case-by-case analysis further identifies insufficient medical knowledge as a key bottleneck that prevents further performance gains through test-time scaling. To overcome this constraint, we find that increasing data scale, improving data quality, and expanding model capacity consistently enhance medical knowledge grounding, enabling continued performance improvements, particularly on challenging medical benchmarks where smaller models reach saturation. These findings underscore fundamental differences between medical and mathematical reasoning in LLMs, highlighting that enriched medical knowledge, rather than increased reasoning depth alone, is essential for fully realizing the benefits of test-time scaling.
APA
Huang, X., Wu, J., Liu, H., Tang, X. & Zhou, Y. (2026). m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning with Large Language Models. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:369-383. Available from https://proceedings.mlr.press/v297/huang26a.html.