CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images

Thiziri Nait Saada, Valentina Di-Proietto, Benoit Schmauch, Katharina Von Loga, Lucas Fidon
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 254:154-169, 2024.

Abstract

Multiple Instance Learning (MIL) models have proven effective for cancer prognosis from Whole Slide Images. However, the original MIL formulation incorrectly assumes the patches of the same image to be independent, leading to a loss of spatial context as information flows through the network. Incorporating contextual knowledge into predictions is particularly important given the inclination for cancerous cells to form clusters and the presence of spatial indicators for tumors. State-of-the-art methods often use attention mechanisms eventually combined with graphs to capture spatial knowledge. In this paper, we take a novel and transversal approach, addressing this issue through the lens of regularization. We propose Context-Aware Regularization for Multiple Instance Learning (CARMIL), a versatile regularization scheme designed to seamlessly integrate spatial knowledge into any MIL model. Additionally, we present a new and generic metric to quantify the Context-Awareness of any MIL model when applied to Whole Slide Images, resolving a previously unexplored gap in the field. The efficacy of our framework is evaluated for two survival analysis tasks on glioblastoma (TCGA GBM) and colon cancer data (TCGA COAD).

Cite this Paper


BibTeX
@InProceedings{pmlr-v254-saada24a,
  title = {CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images},
  author = {Saada, Thiziri Nait and Di-Proietto, Valentina and Schmauch, Benoit and Loga, Katharina Von and Fidon, Lucas},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages = {154--169},
  year = {2024},
  editor = {Ciompi, Francesco and Khalili, Nadieh and Studer, Linda and Poceviciute, Milda and Khan, Amjad and Veta, Mitko and Jiao, Yiping and Haj-Hosseini, Neda and Chen, Hao and Raza, Shan and Minhas, Fayyaz and Zlobec, Inti and Burlutskiy, Nikolay and Vilaplana, Veronica and Brattoli, Biagio and Muller, Henning and Atzori, Manfredo},
  volume = {254},
  series = {Proceedings of Machine Learning Research},
  month = {06 Oct},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v254/main/assets/saada24a/saada24a.pdf},
  url = {https://proceedings.mlr.press/v254/saada24a.html},
  abstract = {Multiple Instance Learning (MIL) models have proven effective for cancer prognosis from Whole Slide Images. However, the original MIL formulation incorrectly assumes the patches of the same image to be independent, leading to a loss of spatial context as information flows through the network. Incorporating contextual knowledge into predictions is particularly important given the inclination for cancerous cells to form clusters and the presence of spatial indicators for tumors. State-of-the-art methods often use attention mechanisms eventually combined with graphs to capture spatial knowledge. In this paper, we take a novel and transversal approach, addressing this issue through the lens of regularization. We propose Context-Aware Regularization for Multiple Instance Learning (CARMIL), a versatile regularization scheme designed to seamlessly integrate spatial knowledge into any MIL model. Additionally, we present a new and generic metric to quantify the Context-Awareness of any MIL model when applied to Whole Slide Images, resolving a previously unexplored gap in the field. The efficacy of our framework is evaluated for two survival analysis tasks on glioblastoma (TCGA GBM) and colon cancer data (TCGA COAD).}
}
Endnote
%0 Conference Paper %T CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images %A Thiziri Nait Saada %A Valentina Di-Proietto %A Benoit Schmauch %A Katharina Von Loga %A Lucas Fidon %B Proceedings of the MICCAI Workshop on Computational Pathology %C Proceedings of Machine Learning Research %D 2024 %E Francesco Ciompi %E Nadieh Khalili %E Linda Studer %E Milda Poceviciute %E Amjad Khan %E Mitko Veta %E Yiping Jiao %E Neda Haj-Hosseini %E Hao Chen %E Shan Raza %E Fayyaz Minhas %E Inti Zlobec %E Nikolay Burlutskiy %E Veronica Vilaplana %E Biagio Brattoli %E Henning Muller %E Manfredo Atzori %F pmlr-v254-saada24a %I PMLR %P 154--169 %U https://proceedings.mlr.press/v254/saada24a.html %V 254 %X Multiple Instance Learning (MIL) models have proven effective for cancer prognosis from Whole Slide Images. However, the original MIL formulation incorrectly assumes the patches of the same image to be independent, leading to a loss of spatial context as information flows through the network. Incorporating contextual knowledge into predictions is particularly important given the inclination for cancerous cells to form clusters and the presence of spatial indicators for tumors. State-of-the-art methods often use attention mechanisms eventually combined with graphs to capture spatial knowledge. In this paper, we take a novel and transversal approach, addressing this issue through the lens of regularization. We propose Context-Aware Regularization for Multiple Instance Learning (CARMIL), a versatile regularization scheme designed to seamlessly integrate spatial knowledge into any MIL model. Additionally, we present a new and generic metric to quantify the Context-Awareness of any MIL model when applied to Whole Slide Images, resolving a previously unexplored gap in the field. The efficacy of our framework is evaluated for two survival analysis tasks on glioblastoma (TCGA GBM) and colon cancer data (TCGA COAD).
APA
Saada, T.N., Di-Proietto, V., Schmauch, B., Loga, K.V. & Fidon, L. (2024). CARMIL: Context-Aware Regularization on Multiple Instance Learning models for Whole Slide Images. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 254:154-169. Available from https://proceedings.mlr.press/v254/saada24a.html.