Continual Pre-Training is (not) What You Need in Domain Adaptation

Pin-Er Chen, Da Chen Lian, Shu-Kai Hsieh, Sieh-Chuen Huang, Hsuan-Lei Shao, Jun Wei Chiu, Yang-Hsien Lin, Zih-Ching Chen, Cheng-Kuang Lee, Eddie Tzungchi Huang, Simon See
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:543-557, 2025.

Abstract

Recent advances in Legal Large Language Models (LLMs) have transformed the landscape of legal research and practice by automating tasks, enhancing research precision, and supporting complex decision-making. However, effectively adapting LLMs to the legal domain remains challenging due to the complexity of legal reasoning, the need for precise interpretation of specialized language, and the potential for hallucinations. This paper examines the efficacy of Domain-Adaptive Continual Pre-Training (DACP) for improving the legal reasoning capabilities of LLMs. Through a series of experiments on legal reasoning tasks within the Taiwanese legal framework, we demonstrate that while DACP enhances domain-specific knowledge, it does not uniformly improve performance across all legal tasks. We discuss the trade-offs involved in DACP, particularly its impact on model generalization and on performance in prompt-based tasks, and propose directions for future research to optimize domain adaptation strategies in legal AI.
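For context, DACP in its usual form means resuming a model's pre-training objective on unlabeled in-domain text rather than fine-tuning on labeled tasks. The sketch below, written against the Hugging Face transformers and datasets APIs, is a minimal illustration of that recipe; the base model name, corpus file, and hyperparameters are assumptions for the example, not the configuration used in the paper.

# A minimal, illustrative DACP loop: resume causal-LM (next-token) training
# of a general-purpose model on an unlabeled domain corpus. The model name,
# corpus file, and hyperparameters are placeholders, not the paper's setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:                  # causal LMs often lack one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text domain corpus, e.g. statutes and rulings (placeholder path).
corpus = load_dataset("text", data_files={"train": "legal_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dacp-legal",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,   # small LR to limit forgetting of general skills
    ),
    train_dataset=tokenized,
    # mlm=False keeps the standard next-token prediction objective
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()

The small learning rate and single epoch reflect a common attempt to trade domain gains against catastrophic forgetting; the paper's finding that DACP does not uniformly help suggests this balance is hard to strike in practice.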

Cite this Paper

BibTeX
@InProceedings{pmlr-v304-chen25a,
  title     = {Continual Pre-Training is (not) What You Need in Domain Adaptation},
  author    = {Chen, Pin-Er and Lian, Da Chen and Hsieh, Shu-Kai and Huang, Sieh-Chuen and Shao, Hsuan-Lei and Chiu, Jun Wei and Lin, Yang-Hsien and Chen, Zih-Ching and Lee, Cheng-Kuang and Huang, Eddie Tzungchi and See, Simon},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {543--557},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/chen25a/chen25a.pdf},
  url       = {https://proceedings.mlr.press/v304/chen25a.html}
}
Endnote
%0 Conference Paper
%T Continual Pre-Training is (not) What You Need in Domain Adaptation
%A Pin-Er Chen
%A Da Chen Lian
%A Shu-Kai Hsieh
%A Sieh-Chuen Huang
%A Hsuan-Lei Shao
%A Jun Wei Chiu
%A Yang-Hsien Lin
%A Zih-Ching Chen
%A Cheng-Kuang Lee
%A Eddie Tzungchi Huang
%A Simon See
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-chen25a
%I PMLR
%P 543--557
%U https://proceedings.mlr.press/v304/chen25a.html
%V 304
APA
Chen, P., Lian, D.C., Hsieh, S., Huang, S., Shao, H., Chiu, J.W., Lin, Y., Chen, Z., Lee, C., Huang, E.T. & See, S. (2025). Continual Pre-Training is (not) What You Need in Domain Adaptation. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:543-557. Available from https://proceedings.mlr.press/v304/chen25a.html.
