CryChime: When Large Language Models Learn to Listen to Distant Cries - A Counterfactual PEFT Framework for Urgent Need Detection in Disaster Social Media

Junhong Cai, Geng Zhao, Jiaxin Li
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:1230-1245, 2025.

Abstract

In recent years, detecting urgent needs or requests expressed in real time in disaster-related posts from affected social media users has become crucial for disaster response and recovery. Current urgent need detectors based on large language models (LLMs) still fall short of the requirements of the disaster-response domain. To close this gap, we propose a novel insight: decomposing the content of posts expressing disaster-induced urgent needs into disaster event statements and disaster-induced appeals. The former, widely present and largely homogeneous at a coarse grain across disaster-related posts, tend to introduce event-induced model bias that leads to false recalls; the latter, characterized by highly personalized, fine-grained, and subjective phrasing, often challenge LLMs to allocate appropriate attention to the corresponding tokens. In light of this, we propose CryChime, a novel model-agnostic parameter-efficient fine-tuning (PEFT) framework. CryChime represents disaster event statements in a bootstrapping style and then removes the event-induced bias through orthogonal LoRA-based counterfactual learning. As fine-tuning proceeds, CryChime gradually disentangles the domain knowledge needed to understand disaster event statements and disaster-induced appeals in candidate posts, then leverages both collaboratively to perform better urgent need detection. Experimental results on two benchmark datasets show that, compared with strong baselines, CryChime listens more effectively to the distant cries of disaster-affected users. Our instruction-tuning data examples will be released in a future preprint version.
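The abstract's "orthogonal LoRA-based counterfactual learning" can be read as maintaining two low-rank adapters on a frozen base layer, one per factor of the decomposition, with an orthogonality penalty keeping their subspaces apart so that dropping the event branch yields a counterfactual, appeal-only prediction. The sketch below is an illustration of that reading, not the authors' released code; the branch names (`event`, `appeal`), the rank, and the penalty form are all assumptions.

```python
# Hypothetical sketch of dual orthogonal LoRA branches on one frozen linear
# layer. The "event" branch is meant to absorb event-statement (bias)
# features; the "appeal" branch models the fine-grained appeal phrasing.
import torch
import torch.nn as nn

class DualLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.scale = alpha / rank
        # Standard LoRA init: small random down-projection A, zero B,
        # so both adapters start as identity perturbations.
        self.A_event = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B_event = nn.Parameter(torch.zeros(d_out, rank))
        self.A_appeal = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B_appeal = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x, use_event=True):
        h = self.base(x)
        if use_event:  # factual pass: both branches contribute
            h = h + self.scale * (x @ self.A_event.T) @ self.B_event.T
        # Counterfactual pass (use_event=False) drops the event branch,
        # keeping only the appeal branch.
        return h + self.scale * (x @ self.A_appeal.T) @ self.B_appeal.T

    def orthogonality_penalty(self):
        # ||A_event A_appeal^T||_F^2 -> 0 pushes the two down-projection
        # row spaces toward orthogonality, disentangling the branches.
        return (self.A_event @ self.A_appeal.T).pow(2).sum()

layer = DualLoRALinear(16, 16)
x = torch.randn(2, 16)
factual = layer(x, use_event=True)
counterfactual = layer(x, use_event=False)
penalty = layer.orthogonality_penalty()
```

During fine-tuning, `penalty` would be added to the task loss, and the gap between `factual` and `counterfactual` logits could serve as the debiased detection signal.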

Cite this Paper


BibTeX
@InProceedings{pmlr-v304-cai25a,
  title     = {CryChime: When Large Language Models Learn to Listen to Distant Cries - A Counterfactual PEFT Framework for Urgent Need Detection in Disaster Social Media},
  author    = {Cai, Junhong and Zhao, Geng and Li, Jiaxin},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {1230--1245},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/cai25a/cai25a.pdf},
  url       = {https://proceedings.mlr.press/v304/cai25a.html},
  abstract  = {In recent years, detecting instantaneously expressed urgent needs or requests in disaster-related posts from disaster-affected social media users has become crucial for disaster response and recovery. To address the gap that the performance of current urgent need detectors based on large language models (LLMs) is below requirements on this task from the domain of disaster response, we propose a novel insight: decomposing and inducting post content expressing disaster-induced urgent needs, into disaster event statements and disaster-induced appeals. The former, widely present and highly coarse-grained homogeneous across disaster-related posts, tends to introduce event-induced model bias leading to false recalls; while the latter, characterized by highly personalized, fine-grained and subjective phrasing, often challenge LLMs to allocate appropriate attentions to the corresponding tokens. In light of this, we propose CryChime, a novel model-agnostic parameter-efficient fine-tuning (PEFT) framework. CryChime represents disaster event statements in a bootstrapping style, and then removes the event-induced bias by orthogonal LoRA-based counterfactual learning. As fine-tuning steps increase, CryChime gradually disentangles the domain knowledge for understanding disaster event statements and disaster-induced appeals in candidate posts, then collaboratively leverage them in performing better urgent need detection. Experimental results on two benchmark datasets show that, compared to the strong baselines, CryChime can more effectively listen to the distant cries from the disaster-affected users. Our instruction-tuning data examples will be released in the further preprint version.}
}
Endnote
%0 Conference Paper
%T CryChime: When Large Language Models Learn to Listen to Distant Cries - A Counterfactual PEFT Framework for Urgent Need Detection in Disaster Social Media
%A Junhong Cai
%A Geng Zhao
%A Jiaxin Li
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-cai25a
%I PMLR
%P 1230--1245
%U https://proceedings.mlr.press/v304/cai25a.html
%V 304
%X In recent years, detecting instantaneously expressed urgent needs or requests in disaster-related posts from disaster-affected social media users has become crucial for disaster response and recovery. To address the gap that the performance of current urgent need detectors based on large language models (LLMs) is below requirements on this task from the domain of disaster response, we propose a novel insight: decomposing and inducting post content expressing disaster-induced urgent needs, into disaster event statements and disaster-induced appeals. The former, widely present and highly coarse-grained homogeneous across disaster-related posts, tends to introduce event-induced model bias leading to false recalls; while the latter, characterized by highly personalized, fine-grained and subjective phrasing, often challenge LLMs to allocate appropriate attentions to the corresponding tokens. In light of this, we propose CryChime, a novel model-agnostic parameter-efficient fine-tuning (PEFT) framework. CryChime represents disaster event statements in a bootstrapping style, and then removes the event-induced bias by orthogonal LoRA-based counterfactual learning. As fine-tuning steps increase, CryChime gradually disentangles the domain knowledge for understanding disaster event statements and disaster-induced appeals in candidate posts, then collaboratively leverage them in performing better urgent need detection. Experimental results on two benchmark datasets show that, compared to the strong baselines, CryChime can more effectively listen to the distant cries from the disaster-affected users. Our instruction-tuning data examples will be released in the further preprint version.
APA
Cai, J., Zhao, G. & Li, J. (2025). CryChime: When Large Language Models Learn to Listen to Distant Cries - A Counterfactual PEFT Framework for Urgent Need Detection in Disaster Social Media. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:1230-1245. Available from https://proceedings.mlr.press/v304/cai25a.html.
