Visual Medical Entity Linking with VELCRO

Kathryn Carbone, Liam Hebert, Robin Cohen, Lukasz Golab
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:1126-1140, 2026.

Abstract

We study a visual entity linking (VEL) problem in which a user selects a region of interest (RoI) in an image (e.g., a brain tumour) and queries a textual knowledge base (KB) for information about the RoI. To solve this problem using cross-modal embeddings such as CLIP, we can encode the KB entries, then either encode the whole image or just the cropped RoI, and run a similarity search between the query and the KB embeddings. However, using the entire image as the query may retrieve KB entries related to other aspects of the image beyond the RoI, whereas using the RoI alone as the query ignores context, which is critical for recognizing and linking complex entities in medical images. To address these shortcomings, we propose VELCRO (visual entity linking with contrastive RoI alignment), which adapts an image segmentation model to VEL by aligning the contextual embeddings produced by its decoder with the KB using contrastive learning. This strategy preserves the information contained in the surrounding image while focusing KB alignment on the RoI. Experiments on medical VEL show that VELCRO achieves 95.3% linking accuracy, compared to 83.9% or lower for baselines.
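To make the baseline concrete, here is a minimal sketch of the CLIP-based similarity search described in the abstract, assuming a Hugging Face CLIP checkpoint. The model name, KB descriptions, input file, and RoI crop box are illustrative placeholders, not details from the paper.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical KB of textual entity descriptions.
kb_entries = [
    "Glioblastoma: an aggressive primary brain tumour ...",
    "Meningioma: a typically benign tumour of the meninges ...",
]

image = Image.open("scan.png").convert("RGB")  # hypothetical input image
roi = image.crop((40, 60, 120, 140))           # hypothetical user-selected RoI box

with torch.no_grad():
    # Encode the KB entries once; they can be cached for repeated queries.
    text_inputs = processor(text=kb_entries, return_tensors="pt", padding=True)
    kb_emb = model.get_text_features(**text_inputs)
    # Query with the cropped RoI (or pass `image` instead for the whole-image variant).
    img_inputs = processor(images=roi, return_tensors="pt")
    q_emb = model.get_image_features(**img_inputs)

# Cosine-similarity search: link the query to the nearest KB entry.
kb_emb = kb_emb / kb_emb.norm(dim=-1, keepdim=True)
q_emb = q_emb / q_emb.norm(dim=-1, keepdim=True)
scores = q_emb @ kb_emb.T
print(kb_entries[scores.argmax().item()])

Ranking the KB by similarity to the cropped RoI is the "RoI alone" baseline; querying with the full image instead gives the "whole image" baseline, with the trade-offs the abstract describes.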
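The abstract does not detail VELCRO's architecture, so the following is only a generic sketch of the contrastive RoI alignment idea: mean-pool the segmentation decoder's per-pixel embeddings over the user-selected RoI mask, project them into the KB text-embedding space, and train with a symmetric InfoNCE loss. All tensor shapes, the projection head, and the pooling scheme are assumptions; the paper's exact design may differ.

import torch
import torch.nn.functional as F

def roi_pool(decoder_feats, roi_mask):
    """Mean-pool decoder embeddings (B, C, H, W) over a binary RoI mask (B, H, W)."""
    mask = roi_mask.unsqueeze(1).float()  # (B, 1, H, W)
    pooled = (decoder_feats * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1.0)
    return pooled  # (B, C)

def contrastive_alignment_loss(roi_emb, kb_emb, temperature=0.07):
    """Symmetric InfoNCE: the i-th RoI should match the i-th KB entry in the batch."""
    roi_emb = F.normalize(roi_emb, dim=-1)
    kb_emb = F.normalize(kb_emb, dim=-1)
    logits = roi_emb @ kb_emb.T / temperature  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# Illustrative training step with random tensors standing in for real features.
decoder_feats = torch.randn(4, 256, 64, 64)   # segmentation decoder output (assumed shape)
roi_masks = torch.randint(0, 2, (4, 64, 64))  # user-selected RoI masks
proj = torch.nn.Linear(256, 512)              # hypothetical projection into the KB space
kb_emb = torch.randn(4, 512)                  # frozen KB text embeddings (e.g., from CLIP)

loss = contrastive_alignment_loss(proj(roi_pool(decoder_feats, roi_masks)), kb_emb)
loss.backward()

Because each decoder feature already aggregates surrounding context through the encoder's receptive field, pooling over the RoI focuses the alignment on the selected region without discarding that context, matching the motivation stated in the abstract.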

Cite this Paper

BibTeX
@InProceedings{pmlr-v297-carbone26a,
  title     = {Visual Medical Entity Linking with {VELCRO}},
  author    = {Carbone, Kathryn and Hebert, Liam and Cohen, Robin and Golab, Lukasz},
  booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium},
  pages     = {1126--1140},
  year      = {2026},
  editor    = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush},
  volume    = {297},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/carbone26a/carbone26a.pdf},
  url       = {https://proceedings.mlr.press/v297/carbone26a.html}
}
Endnote
%0 Conference Paper
%T Visual Medical Entity Linking with VELCRO
%A Kathryn Carbone
%A Liam Hebert
%A Robin Cohen
%A Lukasz Golab
%B Proceedings of the Fifth Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2026
%E Peniel Argaw
%E Haoran Zhang
%E Sarah Jabbour
%E Payal Chandak
%E Jerry Ji
%E Sumit Mukherjee
%E Olawale Salaudeen
%E Trenton Chang
%E Elizabeth Healey
%E Fabian Gröger
%E Amin Adibi
%E Stefan Hegselmann
%E Benjamin Wild
%E Ayush Noori
%F pmlr-v297-carbone26a
%I PMLR
%P 1126--1140
%U https://proceedings.mlr.press/v297/carbone26a.html
%V 297
APA
Carbone, K., Hebert, L., Cohen, R. & Golab, L. (2026). Visual Medical Entity Linking with VELCRO. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:1126-1140. Available from https://proceedings.mlr.press/v297/carbone26a.html.