Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving

Yuxuan Zhou, Xien Liu, Chenwei Yan, Chen Ning, Xiao Zhang, Boxun Li, Xiangling Fu, Shijin Wang, Guoping Hu, Yu Wang, Ji Wu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:78984-79003, 2025.

Abstract

Large language models (LLMs) have demonstrated remarkable performance on various medical benchmarks, but their capabilities across different cognitive levels remain underexplored. In this study, inspired by Bloom’s Taxonomy, we propose a multi-cognitive-level evaluation framework for assessing LLMs in the medical domain. The framework integrates existing medical datasets and introduces tasks targeting three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Using this framework, we systematically evaluate state-of-the-art general and medical LLMs from six prominent families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Our findings reveal a significant performance decline across the evaluated models as cognitive complexity increases, with model size playing a more critical role at higher cognitive levels. Our study highlights the need to enhance LLMs’ medical capabilities at higher cognitive levels and provides insights for developing LLMs suited to real-world medical applications.
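The abstract summarizes the framework without implementation detail. As a rough, hypothetical sketch of what per-cognitive-level evaluation can look like in practice, the Python snippet below groups benchmark items by cognitive level and reports a separate accuracy for each. The three level names follow the abstract, but the item format and the ask_model stub are illustrative assumptions, not the authors’ code or datasets.

from collections import defaultdict

# Cognitive levels named in the paper's abstract.
LEVELS = ("preliminary_knowledge_grasp",
          "comprehensive_knowledge_application",
          "scenario_based_problem_solving")

def ask_model(question: str) -> str:
    # Hypothetical stand-in for a real LLM call (API request or local inference).
    return "A"

def evaluate_by_level(items):
    # items: dicts with "level", "question", and gold "answer" keys (assumed format).
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["level"]] += 1
        if ask_model(item["question"]).strip() == item["answer"]:
            correct[item["level"]] += 1
    # Report accuracy separately per cognitive level rather than one pooled score.
    return {lvl: correct[lvl] / total[lvl] for lvl in LEVELS if total[lvl]}

sample = [
    {"level": "preliminary_knowledge_grasp",
     "question": "Which drug class is a first-line option for hypertension? (A/B)",
     "answer": "A"},
    {"level": "scenario_based_problem_solving",
     "question": "Given this case vignette, what is the next best step? (A/B)",
     "answer": "B"},
]
print(evaluate_by_level(sample))

In the paper, the items would come from the integrated medical datasets and the newly introduced tasks; the point of the sketch is only the per-level aggregation that makes a decline at higher cognitive levels visible.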

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-zhou25n,
  title     = {Evaluating {LLM}s Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving},
  author    = {Zhou, Yuxuan and Liu, Xien and Yan, Chenwei and Ning, Chen and Zhang, Xiao and Li, Boxun and Fu, Xiangling and Wang, Shijin and Hu, Guoping and Wang, Yu and Wu, Ji},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {78984--79003},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhou25n/zhou25n.pdf},
  url       = {https://proceedings.mlr.press/v267/zhou25n.html},
  abstract  = {Large language models (LLMs) have demonstrated remarkable performance on various medical benchmarks, but their capabilities across different cognitive levels remain underexplored. Inspired by Bloom’s Taxonomy, we propose a multi-cognitive-level evaluation framework for assessing LLMs in the medical domain in this study. The framework integrates existing medical datasets and introduces tasks targeting three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Using this framework, we systematically evaluate state-of-the-art general and medical LLMs from six prominent families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Our findings reveal a significant performance decline as cognitive complexity increases across evaluated models, with model size playing a more critical role in performance at higher cognitive levels. Our study highlights the need to enhance LLMs’ medical capabilities at higher cognitive levels and provides insights for developing LLMs suited to real-world medical applications.}
}
Endnote
%0 Conference Paper
%T Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving
%A Yuxuan Zhou
%A Xien Liu
%A Chenwei Yan
%A Chen Ning
%A Xiao Zhang
%A Boxun Li
%A Xiangling Fu
%A Shijin Wang
%A Guoping Hu
%A Yu Wang
%A Ji Wu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhou25n
%I PMLR
%P 78984--79003
%U https://proceedings.mlr.press/v267/zhou25n.html
%V 267
%X Large language models (LLMs) have demonstrated remarkable performance on various medical benchmarks, but their capabilities across different cognitive levels remain underexplored. Inspired by Bloom’s Taxonomy, we propose a multi-cognitive-level evaluation framework for assessing LLMs in the medical domain in this study. The framework integrates existing medical datasets and introduces tasks targeting three cognitive levels: preliminary knowledge grasp, comprehensive knowledge application, and scenario-based problem solving. Using this framework, we systematically evaluate state-of-the-art general and medical LLMs from six prominent families: Llama, Qwen, Gemma, Phi, GPT, and DeepSeek. Our findings reveal a significant performance decline as cognitive complexity increases across evaluated models, with model size playing a more critical role in performance at higher cognitive levels. Our study highlights the need to enhance LLMs’ medical capabilities at higher cognitive levels and provides insights for developing LLMs suited to real-world medical applications.
APA
Zhou, Y., Liu, X., Yan, C., Ning, C., Zhang, X., Li, B., Fu, X., Wang, S., Hu, G., Wang, Y. & Wu, J. (2025). Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:78984-79003. Available from https://proceedings.mlr.press/v267/zhou25n.html.
