A Multi-scale Graph Network with Multi-head Attention for Histopathology Image Diagnosis

Xiaodan Xing, Yixin Ma, Lei Jin, Tianyang Sun, Zhong Xue, Feng Shi, Jinsong Wu, Dinggang Shen
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 156:227-235, 2021.

Abstract

Hematoxylin-eosin (H&E) staining plays an essential role in brain glioma diagnosis, but reading pathologic images and generating diagnostic reports can be tedious and laborious work. Pathologists need to combine and navigate extremely large images at different scales and to quantify different aspects for subtyping. In this work, we propose an automatic diagnosis algorithm that identifies cell types and severity on H&E slides in order to classify five major subtypes of glioma from whole-slide pathological images. The proposed method features a pyramid graph structure and an attention-based multi-instance learning strategy. Our method not only improves classification accuracy by utilizing multi-scale information, but also helps to identify high-risk patches. We summarize patches from multiple resolutions into a graph structure: the nodes of the pyramid graph are feature vectors extracted from image patches, and these nodes are connected according to their spatial adjacency. We then feed the graph into the proposed model, which combines self-attention and graph convolutions. Here, we use a multi-head self-attention architecture in which identical self-attention blocks are stacked in parallel; as demonstrated in Transformer networks, the resulting attention maps capture comprehensive activation patterns from different subspace representations. The proposed method achieves 70% accuracy for glioma subtyping, and the multi-resolution attention maps it generates can help locate proliferation and necrosis in the whole pathologic slide.
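
The abstract describes two main components: a pyramid graph whose nodes are patch-level feature vectors linked by spatial adjacency, and multi-head attention pooling on top of graph convolutions for slide-level (multi-instance) classification. The PyTorch sketch below shows how these pieces could fit together; it is not the authors' implementation, and the feature dimensions, number of heads, adjacency rule, and layer choices are assumptions made purely for illustration.

    # Minimal sketch (assumed, not the authors' code): a dense graph convolution
    # over patch features followed by gated multi-head attention pooling.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def dense_graph_conv(x, adj):
        """One propagation step X' = D^{-1} (A + I) X over a dense adjacency matrix."""
        adj = adj + torch.eye(adj.size(0))               # add self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees
        return (adj @ x) / deg                           # mean aggregation over neighbors


    class MultiHeadAttentionMIL(nn.Module):
        """Several parallel attention heads; each head yields its own per-patch
        attention map, and the attention-pooled features are concatenated."""

        def __init__(self, in_dim=512, hidden=256, heads=4, n_classes=5):
            super().__init__()
            self.proj = nn.Linear(in_dim, hidden)
            self.att_v = nn.ModuleList([nn.Linear(hidden, 64) for _ in range(heads)])
            self.att_w = nn.ModuleList([nn.Linear(64, 1) for _ in range(heads)])
            self.classifier = nn.Linear(hidden * heads, n_classes)

        def forward(self, x, adj):
            # x:   (n_patches, in_dim) patch features from all pyramid levels
            # adj: (n_patches, n_patches) spatial/scale adjacency
            h = F.relu(self.proj(dense_graph_conv(x, adj)))    # graph conv + projection
            pooled, attn_maps = [], []
            for v, w in zip(self.att_v, self.att_w):
                a = torch.softmax(w(torch.tanh(v(h))), dim=0)  # (n_patches, 1) attention
                pooled.append((a * h).sum(dim=0))              # attention-weighted bag feature
                attn_maps.append(a.squeeze(-1))                # keep map for visualization
            logits = self.classifier(torch.cat(pooled, dim=-1))
            return logits, torch.stack(attn_maps)              # maps highlight high-risk patches

    # Toy usage: 30 patches with 512-d features and a random symmetric adjacency.
    if __name__ == "__main__":
        feats = torch.randn(30, 512)
        adj = (torch.rand(30, 30) > 0.8).float()
        adj = ((adj + adj.t()) > 0).float()
        model = MultiHeadAttentionMIL()
        logits, maps = model(feats, adj)
        print(logits.shape, maps.shape)  # torch.Size([5]) torch.Size([4, 30])

In this reading, the multi-scale aspect enters through how the adjacency matrix links patches within and across resolution levels, and each head's attention vector can be mapped back to patch coordinates to highlight regions such as proliferation or necrosis.
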

Cite this Paper


BibTeX
@InProceedings{pmlr-v156-xing21a,
  title     = {A Multi-scale Graph Network with Multi-head Attention for Histopathology Image Diagnosis},
  author    = {Xing, Xiaodan and Ma, Yixin and Jin, Lei and Sun, Tianyang and Xue, Zhong and Shi, Feng and Wu, Jinsong and Shen, Dinggang},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages     = {227--235},
  year      = {2021},
  editor    = {Atzori, Manfredo and Burlutskiy, Nikolay and Ciompi, Francesco and Li, Zhang and Minhas, Fayyaz and Müller, Henning and Peng, Tingying and Rajpoot, Nasir and Torben-Nielsen, Ben and van der Laak, Jeroen and Veta, Mitko and Yuan, Yinyin and Zlobec, Inti},
  volume    = {156},
  series    = {Proceedings of Machine Learning Research},
  month     = {27 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v156/xing21a/xing21a.pdf},
  url       = {https://proceedings.mlr.press/v156/xing21a.html},
  abstract  = {Hematoxylin-eosin (H&E) staining plays an essential role in brain glioma diagnosis, but reading pathologic images and generating diagnostic reports can be tedious and laborious work. Pathologists need to combine and navigate extremely large images at different scales and to quantify different aspects for subtyping. In this work, we propose an automatic diagnosis algorithm that identifies cell types and severity on H&E slides in order to classify five major subtypes of glioma from whole-slide pathological images. The proposed method features a pyramid graph structure and an attention-based multi-instance learning strategy. Our method not only improves classification accuracy by utilizing multi-scale information, but also helps to identify high-risk patches. We summarize patches from multiple resolutions into a graph structure: the nodes of the pyramid graph are feature vectors extracted from image patches, and these nodes are connected according to their spatial adjacency. We then feed the graph into the proposed model, which combines self-attention and graph convolutions. Here, we use a multi-head self-attention architecture in which identical self-attention blocks are stacked in parallel; as demonstrated in Transformer networks, the resulting attention maps capture comprehensive activation patterns from different subspace representations. The proposed method achieves 70% accuracy for glioma subtyping, and the multi-resolution attention maps it generates can help locate proliferation and necrosis in the whole pathologic slide.}
}
Endnote
%0 Conference Paper
%T A Multi-scale Graph Network with Multi-head Attention for Histopathology Image Diagnosis
%A Xiaodan Xing
%A Yixin Ma
%A Lei Jin
%A Tianyang Sun
%A Zhong Xue
%A Feng Shi
%A Jinsong Wu
%A Dinggang Shen
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2021
%E Manfredo Atzori
%E Nikolay Burlutskiy
%E Francesco Ciompi
%E Zhang Li
%E Fayyaz Minhas
%E Henning Müller
%E Tingying Peng
%E Nasir Rajpoot
%E Ben Torben-Nielsen
%E Jeroen van der Laak
%E Mitko Veta
%E Yinyin Yuan
%E Inti Zlobec
%F pmlr-v156-xing21a
%I PMLR
%P 227--235
%U https://proceedings.mlr.press/v156/xing21a.html
%V 156
%X Hematoxylin-eosin (H&E) staining plays an essential role in brain glioma diagnosis, but reading pathologic images and generating diagnostic reports can be tedious and laborious work. Pathologists need to combine and navigate extremely large images at different scales and to quantify different aspects for subtyping. In this work, we propose an automatic diagnosis algorithm that identifies cell types and severity on H&E slides in order to classify five major subtypes of glioma from whole-slide pathological images. The proposed method features a pyramid graph structure and an attention-based multi-instance learning strategy. Our method not only improves classification accuracy by utilizing multi-scale information, but also helps to identify high-risk patches. We summarize patches from multiple resolutions into a graph structure: the nodes of the pyramid graph are feature vectors extracted from image patches, and these nodes are connected according to their spatial adjacency. We then feed the graph into the proposed model, which combines self-attention and graph convolutions. Here, we use a multi-head self-attention architecture in which identical self-attention blocks are stacked in parallel; as demonstrated in Transformer networks, the resulting attention maps capture comprehensive activation patterns from different subspace representations. The proposed method achieves 70% accuracy for glioma subtyping, and the multi-resolution attention maps it generates can help locate proliferation and necrosis in the whole pathologic slide.
APA
Xing, X., Ma, Y., Jin, L., Sun, T., Xue, Z., Shi, F., Wu, J. & Shen, D. (2021). A Multi-scale Graph Network with Multi-head Attention for Histopathology Image Diagnosis. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 156:227-235. Available from https://proceedings.mlr.press/v156/xing21a.html.
