From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR

Ran Xu, Yiwen Lu, Chang Liu, Yong Chen, Yan Sun, Xiao Hu, Joyce C Ho, Carl Yang
Proceedings of the fifth Conference on Health, Inference, and Learning, PMLR 248:182-197, 2024.

Abstract

Electronic Health Records (EHRs) contain rich patient information and are crucial for clinical research and practice. In recent years, deep learning models have been applied to EHRs, but they often rely on massive features, which may not be readily available for all patients. We propose HTP-Star (short for Hypergraph Transformer Pretrain-then-Finetuning with Smoothness-induced regularization and Reweighting), which leverages hypergraph structures with a pretrain-then-finetune framework for modeling EHR data, enabling seamless integration of additional features. Additionally, we design two techniques, namely (1) Smoothness-inducing Regularization and (2) Group-balanced Reweighting, to enhance the model's robustness during finetuning. Through experiments conducted on two real EHR datasets, we demonstrate that HTP-Star consistently outperforms various baselines while striking a balance between patients with basic and extra features.
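The two finetuning techniques named in the abstract lend themselves to a brief illustration. The sketch below is a minimal, hypothetical PyTorch rendering under our own assumptions, not the authors' released code: smoothness-inducing regularization is instantiated as a consistency penalty (symmetric KL) between predictions on clean and slightly perturbed inputs, and group-balanced reweighting scales each patient's loss inversely to the size of their group (e.g., basic-feature vs. extra-feature patients). All names here (smoothness_reg, group_weights, finetune_loss, eps, lam) are illustrative placeholders.

import torch
import torch.nn.functional as F

def smoothness_reg(model, x, eps=1e-3):
    # Consistency between predictions on x and a small random perturbation
    # of x; one common way to instantiate smoothness-inducing regularization.
    noise = eps * torch.randn_like(x)
    p = F.log_softmax(model(x), dim=-1)
    q = F.log_softmax(model(x + noise), dim=-1)
    # Symmetric KL between the two predictive distributions.
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

def group_weights(group_ids):
    # Per-sample weights inversely proportional to group frequency, so each
    # group contributes equally to the loss regardless of its size.
    _, inverse, counts = torch.unique(group_ids, return_inverse=True,
                                      return_counts=True)
    w = 1.0 / counts.float()
    w = w / w.sum() * len(w)  # normalize so weights average roughly 1 per group
    return w[inverse]

def finetune_loss(model, x, y, group_ids, lam=1.0):
    # Group-balanced cross-entropy plus the smoothness penalty.
    ce = F.cross_entropy(model(x), y, reduction="none")
    return (group_weights(group_ids) * ce).mean() + lam * smoothness_reg(model, x)

In practice the perturbation could be adversarial rather than random, and the groups could be predefined clinical cohorts; the sketch only fixes the overall shape of such a finetuning objective.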

Cite this Paper


BibTeX
@InProceedings{pmlr-v248-xu24a,
  title     = {From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR},
  author    = {Xu, Ran and Lu, Yiwen and Liu, Chang and Chen, Yong and Sun, Yan and Hu, Xiao and Ho, Joyce C and Yang, Carl},
  booktitle = {Proceedings of the fifth Conference on Health, Inference, and Learning},
  pages     = {182--197},
  year      = {2024},
  editor    = {Pollard, Tom and Choi, Edward and Singhal, Pankhuri and Hughes, Michael and Sizikova, Elena and Mortazavi, Bobak and Chen, Irene and Wang, Fei and Sarker, Tasmie and McDermott, Matthew and Ghassemi, Marzyeh},
  volume    = {248},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--28 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v248/main/assets/xu24a/xu24a.pdf},
  url       = {https://proceedings.mlr.press/v248/xu24a.html}
}
Endnote
%0 Conference Paper
%T From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR
%A Ran Xu
%A Yiwen Lu
%A Chang Liu
%A Yong Chen
%A Yan Sun
%A Xiao Hu
%A Joyce C Ho
%A Carl Yang
%B Proceedings of the fifth Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Tom Pollard
%E Edward Choi
%E Pankhuri Singhal
%E Michael Hughes
%E Elena Sizikova
%E Bobak Mortazavi
%E Irene Chen
%E Fei Wang
%E Tasmie Sarker
%E Matthew McDermott
%E Marzyeh Ghassemi
%F pmlr-v248-xu24a
%I PMLR
%P 182--197
%U https://proceedings.mlr.press/v248/xu24a.html
%V 248
APA
Xu, R., Lu, Y., Liu, C., Chen, Y., Sun, Y., Hu, X., Ho, J.C. & Yang, C. (2024). From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR. Proceedings of the fifth Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 248:182-197. Available from https://proceedings.mlr.press/v248/xu24a.html.