TSMCR: Two-stage Supervised Multi-modality Contrastive Representation for Ultrasound-based Breast Cancer Diagnosis

Bangming Gong
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:1032-1047, 2025.

Abstract

Contrastive learning has demonstrated strong performance in breast cancer diagnosis. However, few existing works exploit label information in contrastive representation learning, especially in multi-modality ultrasound settings. In this work, a two-stage supervised multi-modality contrastive representation classification network (TSMCR) is proposed to assist breast cancer diagnosis on multi-modality ultrasound. TSMCR consists of two-stage supervised multi-modality contrastive learning (SMCL) and a deep support vector machine (DSVM). Through a novel contrastive loss, SMCL enforces consistency between modalities while preserving sample separability. Further, two-stage SMCL learns expressive representations by gradually pulling the similar samples of positive pairs closer and pushing the dissimilar samples of negative pairs apart in the projection space. Finally, on the fusion of the multi-level contrastive representations, DSVM jointly relearns the representation network and the classifier in a unified framework to improve generalization performance. Experimental results on a multi-modality ultrasound dataset show that the proposed TSMCR achieves superior performance, with an accuracy of 87.51%, sensitivity of 86.67%, and specificity of 88.36%.
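The abstract describes a supervised contrastive objective in which same-label samples across modalities form positive pairs. The paper's exact loss is not given here; the sketch below is a generic SupCon-style supervised contrastive loss over two modality embeddings, with all function and variable names hypothetical, to illustrate the "pull positives together, push negatives apart" mechanism only:

```python
import numpy as np

def supervised_multimodal_contrastive_loss(z_a, z_b, labels, temperature=0.1):
    """Hypothetical sketch (not the paper's loss): SupCon-style supervised
    contrastive loss over embeddings from two modalities. Same-label samples,
    from either modality, act as positives for each anchor. Assumes every
    label appears at least twice in the combined batch."""
    # Stack both modality embeddings and L2-normalise rows.
    z = np.concatenate([z_a, z_b], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    y = np.concatenate([labels, labels])
    n = z.shape[0]

    sim = z @ z.T / temperature                  # pairwise cosine similarities
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)      # exclude self-comparisons

    # Log-softmax over each anchor's row (self term contributes exp(-inf)=0).
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    pos = (y[:, None] == y[None, :]) & ~self_mask  # same-label positive pairs
    # Mean negative log-probability of positives, averaged over anchors.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)
    return float(per_anchor.mean())
```

Minimising this loss pulls same-class embeddings from both modalities together in the projection space while pushing different-class embeddings apart, which is the behaviour the abstract attributes to SMCL.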

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-gong25a,
  title     = {{TSMCR}: {T}wo-stage Supervised Multi-modality Contrastive Representation for Ultrasound-based Breast Cancer Diagnosis},
  author    = {Gong, Bangming},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {1032--1047},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/gong25a/gong25a.pdf},
  url       = {https://proceedings.mlr.press/v260/gong25a.html},
  abstract  = {Contrastive learning has demonstrated great performance in breast cancer diagnosis. However, few existing works inspect label information in contrastive representation learning, especially for multi-modality ultrasound scenes. In this work, a two-stage supervised multi-modality contrastive representation classification network (TSMCR) is proposed for assisting breast cancer diagnosis on the multimodality ultrasound. TSMCR consists of two-stage supervised multimodality contrastive learning (SMCL) and deep support vector machine (DSVM). By a novel contrastive loss, SMCL handles the consistency between modalities and the sample separability. Further, two-stage SMCL learns expressive representation by gradually pulling the similar samples of positive pairs closer and pushing the dissimilar samples of negative pairs apart in the projection space. Besides, on the fusion of the multi-level contrastive representation, DSVM is to jointly learn the representation network and classifier again in a unified framework to improve the generation performance. The experimental results on the multimodality ultrasound dataset show the proposed TSMCR achieves superior performance with an accuracy of 87.51%, sensitivity of 86.67%, and specificity of 88.36%.}
}
Endnote
%0 Conference Paper
%T TSMCR: Two-stage Supervised Multi-modality Contrastive Representation for Ultrasound-based Breast Cancer Diagnosis
%A Bangming Gong
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-gong25a
%I PMLR
%P 1032--1047
%U https://proceedings.mlr.press/v260/gong25a.html
%V 260
%X Contrastive learning has demonstrated great performance in breast cancer diagnosis. However, few existing works inspect label information in contrastive representation learning, especially for multi-modality ultrasound scenes. In this work, a two-stage supervised multi-modality contrastive representation classification network (TSMCR) is proposed for assisting breast cancer diagnosis on the multimodality ultrasound. TSMCR consists of two-stage supervised multimodality contrastive learning (SMCL) and deep support vector machine (DSVM). By a novel contrastive loss, SMCL handles the consistency between modalities and the sample separability. Further, two-stage SMCL learns expressive representation by gradually pulling the similar samples of positive pairs closer and pushing the dissimilar samples of negative pairs apart in the projection space. Besides, on the fusion of the multi-level contrastive representation, DSVM is to jointly learn the representation network and classifier again in a unified framework to improve the generation performance. The experimental results on the multimodality ultrasound dataset show the proposed TSMCR achieves superior performance with an accuracy of 87.51%, sensitivity of 86.67%, and specificity of 88.36%.
APA
Gong, B. (2025). TSMCR: Two-stage Supervised Multi-modality Contrastive Representation for Ultrasound-based Breast Cancer Diagnosis. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:1032-1047. Available from https://proceedings.mlr.press/v260/gong25a.html.