Linear Attention-based Multiple Instance Learning for Computational Pathology

Charlotte Richter, Daniel Reisenbüchler, Nadine S. Schaadt, Friedrich Feuerhake, Dorit Merhof
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 316:86-96, 2026.

Abstract

Deep learning–based analysis of gigapixel whole slide images (WSIs) in computational pathology (CPath) typically relies on patch-level feature extraction and instance aggregation, with attention-based contextualization at the core of state-of-the-art methods. However, scalability is a major challenge due to the vast number of patches. Therefore, we introduce linear attention-based multiple instance learning (Lin-MIL), which transposes and interchanges the calculations of queries, keys, and values in the attention mechanism. By leveraging linear attention, Lin-MIL reduces computational complexity from O(n^2d) to O(nd^2), compared to vanilla self-attention. Despite this efficiency gain, Lin-MIL outperforms 12 baseline methods across biomarker, mutation, and tumor classification benchmarks, while also demonstrating robust out-of-domain performance. Moreover, its qualitative attention maps highlight diagnostically relevant regions. In summary, Lin-MIL provides increased performance as well as enhanced scalability and interpretability for a range of computational pathology tasks. Code available at https://github.com/charlotterchtr/Lin-MIL.
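The complexity reduction described above comes from reordering the attention matrix products: instead of forming the n×n similarity matrix softmax(QKᵀ) and multiplying by V (O(n²d)), a kernelized (linear) attention first computes the d×d product φ(K)ᵀV and then multiplies by φ(Q) (O(nd²)). The sketch below illustrates this reordering with a common elu(x)+1 feature map; it is a generic linear-attention formulation for intuition, not necessarily the exact variant used in Lin-MIL.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Linear attention over n instances with d-dim features.

    Q, K, V: arrays of shape (n, d). The n x n attention matrix is
    never materialized; cost is O(n d^2) instead of O(n^2 d).
    """
    # Positive feature map phi(x) = elu(x) + 1 (a common choice in
    # linear-transformer literature; an assumption here).
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)          # (n, d) each

    KV = Kp.T @ V                    # (d, d): aggregate keys/values first
    Z = Qp @ Kp.sum(axis=0)          # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]    # (n, d): normalized attention output

# For small n this matches the naive kernelized attention
# (phi(Q) phi(K)^T, row-normalized, times V) exactly.
```

With n patches per slide often in the tens of thousands and d a few hundred, computing the d×d matrix KV once and reusing it for every query is what makes whole-slide aggregation tractable.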

Cite this Paper


BibTeX
@InProceedings{pmlr-v316-richter26a,
  title = {Linear Attention-based Multiple Instance Learning for Computational Pathology},
  author = {Richter, Charlotte and Reisenb\"{u}chler, Daniel and Schaadt, Nadine S. and Feuerhake, Friedrich and Merhof, Dorit},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages = {86--96},
  year = {2026},
  editor = {Studer, Linda and Ciompi, Francesco and Khalili, Nadieh and Faryna, Khrystyna and Yeong, Joe and Lau, Mai Chan and Chen, Hao and Liu, Ziyi and Brattoli, Biagio},
  volume = {316},
  series = {Proceedings of Machine Learning Research},
  month = {27 Sep},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v316/main/assets/richter26a/richter26a.pdf},
  url = {https://proceedings.mlr.press/v316/richter26a.html},
  abstract = {Deep learning–based analysis of gigapixel whole slide images (WSIs) in computational pathology (CPath) typically relies on patch-level feature extraction and instance aggregation, with attention-based contextualization at the core of state-of-the-art methods. However, scalability is a major challenge due to the vast number of patches. Therefore, we introduce linear attention-based multiple instance learning (Lin-MIL), which transposes and interchanges the calculations of queries, keys, and values in the attention mechanism. By leveraging linear attention, Lin-MIL reduces computational complexity from O(n^2d) to O(nd^2), compared to vanilla self-attention. Despite this efficiency gain, Lin-MIL outperforms 12 baseline methods across biomarker, mutation, and tumor classification benchmarks, while also demonstrating robust out-of-domain performance. Moreover, its qualitative attention maps highlight diagnostically relevant regions. In summary, Lin-MIL provides increased performance as well as enhanced scalability and interpretability for a range of computational pathology tasks. Code available at https://github.com/charlotterchtr/Lin-MIL.}
}
Endnote
%0 Conference Paper
%T Linear Attention-based Multiple Instance Learning for Computational Pathology
%A Charlotte Richter
%A Daniel Reisenbüchler
%A Nadine S. Schaadt
%A Friedrich Feuerhake
%A Dorit Merhof
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2026
%E Linda Studer
%E Francesco Ciompi
%E Nadieh Khalili
%E Khrystyna Faryna
%E Joe Yeong
%E Mai Chan Lau
%E Hao Chen
%E Ziyi Liu
%E Biagio Brattoli
%F pmlr-v316-richter26a
%I PMLR
%P 86--96
%U https://proceedings.mlr.press/v316/richter26a.html
%V 316
%X Deep learning–based analysis of gigapixel whole slide images (WSIs) in computational pathology (CPath) typically relies on patch-level feature extraction and instance aggregation, with attention-based contextualization at the core of state-of-the-art methods. However, scalability is a major challenge due to the vast number of patches. Therefore, we introduce linear attention-based multiple instance learning (Lin-MIL), which transposes and interchanges the calculations of queries, keys, and values in the attention mechanism. By leveraging linear attention, Lin-MIL reduces computational complexity from O(n^2d) to O(nd^2), compared to vanilla self-attention. Despite this efficiency gain, Lin-MIL outperforms 12 baseline methods across biomarker, mutation, and tumor classification benchmarks, while also demonstrating robust out-of-domain performance. Moreover, its qualitative attention maps highlight diagnostically relevant regions. In summary, Lin-MIL provides increased performance as well as enhanced scalability and interpretability for a range of computational pathology tasks. Code available at https://github.com/charlotterchtr/Lin-MIL.
APA
Richter, C., Reisenbüchler, D., Schaadt, N.S., Feuerhake, F. & Merhof, D. (2026). Linear Attention-based Multiple Instance Learning for Computational Pathology. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 316:86-96. Available from https://proceedings.mlr.press/v316/richter26a.html.