Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection
Proceedings of the 9th Machine Learning for Healthcare Conference, PMLR 252, 2024.
Abstract
Large Language Models (LLMs) have demonstrated their efficacy across a broad spectrum of tasks in healthcare applications. However, LLMs often need to be fine-tuned on task-specific, expert-annotated data to achieve optimal performance, which can be expensive and time-consuming. In this study, we fine-tune PaLM-2 with parameter-efficient fine-tuning (PEFT) using noisy labels obtained from Gemini Pro 1.0 for the detection of Schedule-of-Event (SoE) tables, which specify the care plan in clinical trial protocols. We introduce a filtering mechanism to select high-confidence labels for this table classification task, thereby reducing the noise in the auto-generated labels. We find that PaLM-2 fine-tuned on the filtered labels outperforms Gemini Pro 1.0 and other LLMs on this task, and achieves performance close to that of PaLM-2 fine-tuned on non-expert human annotations. Our results show that leveraging LLM-generated labels, coupled with strategic filtering, can be a viable and cost-effective strategy for improving LLM performance on specialized tasks, especially in domains where expert annotations are scarce, expensive, or time-consuming to obtain.
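To make the label-filtering idea concrete, the sketch below shows one plausible way to select high-confidence auto-generated labels before fine-tuning. The abstract does not specify how confidence is measured, so this uses repeated-query agreement (self-consistency voting) as an illustrative assumption; the `label_fn` callable, the number of samples, and the agreement threshold are all hypothetical stand-ins, not the paper's actual mechanism.

```python
from collections import Counter

def filter_high_confidence_labels(tables, label_fn, n_samples=5, threshold=0.8):
    """Keep only tables whose auto-generated label is stable across
    repeated LLM queries, using agreement ratio as a confidence proxy.

    label_fn: hypothetical callable that asks the labeling LLM
    (e.g., Gemini Pro 1.0) whether a table is a Schedule-of-Event
    table, returning a label such as "SoE" or "not_SoE".
    """
    filtered = []
    for table in tables:
        # Query the labeler several times and tally the labels it returns.
        votes = Counter(label_fn(table) for _ in range(n_samples))
        label, count = votes.most_common(1)[0]
        # Keep the example only if the majority label is sufficiently dominant.
        if count / n_samples >= threshold:
            filtered.append((table, label))
    return filtered
```

The resulting (table, label) pairs would then serve as the training set for PEFT fine-tuning of the downstream model (PaLM-2 in this study); examples with unstable labels are simply discarded rather than corrected, trading dataset size for label quality.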