Topoformer: Topology-Infused Transformers for Medical Imaging

Sayoni Chakraborty, Philmore Koung, Baris Coskunuzer
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:23-40, 2026.

Abstract

Deep learning has transformed 2D medical imaging, but scaling to 3D volumes remains difficult due to high compute, scarce annotations, and the loss of global context in patch-based pipelines. We present Topoformer, a transformer framework that makes 3D classification both data- and compute-efficient by integrating topological priors. First, we introduce a sliding-band cubical filtration that replaces a single global persistent-homology pass with overlapping intensity bands, yielding an ordered sequence of Betti tokens (components, tunnels, cavities). These tokens act as transformer inputs, enabling multi-scale topological reasoning without early saturation. Second, we propose Topological Supervised Contrastive Learning (TopoSupCon), which treats the image and its label-preserving topological view as complementary modalities, reducing reliance on brittle geometric or generative augmentations. A lightweight TopoGate further lets the image softly weight multiple band widths per case. On 3D brain MRI tumor grading and chest CT benchmarks in low-data regimes, Topoformer achieves consistent gains over strong 3D CNN and ViT baselines, including improvements up to 12 AUC points and 8 accuracy points. Our results show that sequential, topology-aware representations provide a powerful inductive bias for volumetric medical image analysis.
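The sliding-band idea described above — replacing one global filtration with overlapping intensity bands, each contributing a topological token — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it counts only connected components (β₀) per band via `scipy.ndimage.label`, whereas the actual method uses a cubical persistent-homology filtration that also tracks tunnels (β₁) and cavities (β₂).

```python
import numpy as np
from scipy import ndimage


def betti0_band_tokens(volume, n_bands=8, overlap=0.5):
    """Sketch of a sliding-band token sequence (illustration only).

    Thresholds the volume into overlapping intensity bands and counts
    26-connected components (beta_0) in each band, yielding an ordered
    sequence of simple topological tokens.
    """
    lo, hi = float(volume.min()), float(volume.max())
    # Choose band width so that n_bands bands with the given overlap
    # fraction exactly cover the intensity range [lo, hi].
    width = (hi - lo) / (n_bands - overlap * (n_bands - 1))
    step = width * (1.0 - overlap)
    tokens = []
    for i in range(n_bands):
        a = lo + i * step
        b = a + width
        mask = (volume >= a) & (volume <= b)
        # beta_0 of this band: number of 26-connected components
        _, n_comp = ndimage.label(mask, structure=np.ones((3, 3, 3)))
        tokens.append(n_comp)
    return tokens
```

Each token sequence is ordered by intensity band, so it can be fed to a transformer like any other token stream; the band width and overlap here are illustrative hyperparameters, whereas the paper's TopoGate weights multiple band widths per case.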

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-chakraborty26a,
  title     = {{Topoformer}: Topology-Infused Transformers for Medical Imaging},
  author    = {Chakraborty, Sayoni and Koung, Philmore and Coskunuzer, Baris},
  booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium},
  pages     = {23--40},
  year      = {2026},
  editor    = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush},
  volume    = {297},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/chakraborty26a/chakraborty26a.pdf},
  url       = {https://proceedings.mlr.press/v297/chakraborty26a.html},
  abstract  = {Deep learning has transformed 2D medical imaging, but scaling to 3D volumes remains difficult due to high compute, scarce annotations, and the loss of global context in patch-based pipelines. We present Topoformer, a transformer framework that makes 3D classification both data- and compute-efficient by integrating topological priors. First, we introduce a sliding-band cubical filtration that replaces a single global persistent-homology pass with overlapping intensity bands, yielding an ordered sequence of Betti tokens (components, tunnels, cavities). These tokens act as transformer inputs, enabling multi-scale topological reasoning without early saturation. Second, we propose Topological Supervised Contrastive Learning (TopoSupCon), which treats the image and its label-preserving topological view as complementary modalities, reducing reliance on brittle geometric or generative augmentations. A lightweight TopoGate further lets the image softly weight multiple band widths per case. On 3D brain MRI tumor grading and chest CT benchmarks in low-data regimes, Topoformer achieves consistent gains over strong 3D CNN and ViT baselines, including improvements up to 12 AUC points and 8 accuracy points. Our results show that sequential, topology-aware representations provide a powerful inductive bias for volumetric medical image analysis.}
}
Endnote
%0 Conference Paper
%T Topoformer: Topology-Infused Transformers for Medical Imaging
%A Sayoni Chakraborty
%A Philmore Koung
%A Baris Coskunuzer
%B Proceedings of the Fifth Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2026
%E Peniel Argaw
%E Haoran Zhang
%E Sarah Jabbour
%E Payal Chandak
%E Jerry Ji
%E Sumit Mukherjee
%E Olawale Salaudeen
%E Trenton Chang
%E Elizabeth Healey
%E Fabian Gröger
%E Amin Adibi
%E Stefan Hegselmann
%E Benjamin Wild
%E Ayush Noori
%F pmlr-v297-chakraborty26a
%I PMLR
%P 23--40
%U https://proceedings.mlr.press/v297/chakraborty26a.html
%V 297
%X Deep learning has transformed 2D medical imaging, but scaling to 3D volumes remains difficult due to high compute, scarce annotations, and the loss of global context in patch-based pipelines. We present Topoformer, a transformer framework that makes 3D classification both data- and compute-efficient by integrating topological priors. First, we introduce a sliding-band cubical filtration that replaces a single global persistent-homology pass with overlapping intensity bands, yielding an ordered sequence of Betti tokens (components, tunnels, cavities). These tokens act as transformer inputs, enabling multi-scale topological reasoning without early saturation. Second, we propose Topological Supervised Contrastive Learning (TopoSupCon), which treats the image and its label-preserving topological view as complementary modalities, reducing reliance on brittle geometric or generative augmentations. A lightweight TopoGate further lets the image softly weight multiple band widths per case. On 3D brain MRI tumor grading and chest CT benchmarks in low-data regimes, Topoformer achieves consistent gains over strong 3D CNN and ViT baselines, including improvements up to 12 AUC points and 8 accuracy points. Our results show that sequential, topology-aware representations provide a powerful inductive bias for volumetric medical image analysis.
APA
Chakraborty, S., Koung, P. & Coskunuzer, B. (2026). Topoformer: Topology-Infused Transformers for Medical Imaging. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:23-40. Available from https://proceedings.mlr.press/v297/chakraborty26a.html.
