ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging

Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10153-10169, 2023.

Abstract

In some medical imaging tasks and other settings where only small parts of the image are informative for the classification task, traditional CNNs can sometimes struggle to generalise. Manually annotated Regions of Interest (ROI) are often used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as Adversarial Counterfactual Attention (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84% and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. They are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of 65.05% on brain CT scans, improving the score of 61.29% obtained with the best competing method.
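
As a rough illustration of the attention mechanism the abstract describes (this is not the authors' released code), the sketch below shows one plausible way a saliency map could be resized and applied as a soft spatial mask to CNN features at several scales. The helper name modulate_features, the sigmoid squashing, and the residual-style f * (1 + mask) rule are illustrative assumptions, not the paper's exact formulation.

# Minimal PyTorch sketch: saliency-derived soft spatial attention
# applied to backbone features at multiple scales (assumed details).
import torch
import torch.nn.functional as F

def modulate_features(feature_maps, saliency):
    """feature_maps: list of tensors (B, C_i, H_i, W_i) from a CNN backbone.
    saliency:     tensor (B, 1, H, W), saliency map for the input image."""
    modulated = []
    for f in feature_maps:
        # Resize the saliency map to this feature scale and squash it
        # into (0, 1) so it acts as a soft, not hard, spatial mask.
        mask = F.interpolate(saliency, size=f.shape[-2:],
                             mode="bilinear", align_corners=False)
        mask = torch.sigmoid(mask)
        # Residual-style modulation: keep the original signal but
        # amplify locations the saliency map marks as informative.
        modulated.append(f * (1.0 + mask))
    return modulated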
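
Likewise, a hedged sketch of how a saliency map might be obtained from an adversarially generated counterfactual, per the abstract: take gradient steps that push the classifier towards a different class and read off where the image had to change. The pixel-space FGSM-style update, step count, and step size here are assumptions for illustration only; the paper details its own generation procedure.

# Minimal PyTorch sketch: saliency from an adversarial counterfactual
# (simplified pixel-space version; update rule and hyperparameters assumed).
import torch

def counterfactual_saliency(model, x, target_class, steps=20, step_size=0.01):
    """model: classifier returning logits (B, num_classes).
    x: input scan, tensor (B, C, H, W).
    target_class: counterfactual class to push the input towards."""
    x_cf = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_cf)
        # Maximise the counterfactual class score.
        loss = logits[:, target_class].sum()
        grad, = torch.autograd.grad(loss, x_cf)
        # FGSM-style ascent step towards the counterfactual class.
        x_cf = (x_cf.detach() + step_size * grad.sign()).requires_grad_(True)
    # Regions that had to change most to alter the prediction are taken
    # as the most informative; aggregate the change into one spatial map.
    return (x_cf.detach() - x).abs().sum(dim=1, keepdim=True)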

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-fontanella23a,
  title     = {{ACAT}: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging},
  author    = {Fontanella, Alessandro and Antoniou, Antreas and Li, Wenwen and Wardlaw, Joanna and Mair, Grant and Trucco, Emanuele and Storkey, Amos},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {10153--10169},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/fontanella23a/fontanella23a.pdf},
  url       = {https://proceedings.mlr.press/v202/fontanella23a.html},
  abstract  = {In some medical imaging tasks and other settings where only small parts of the image are informative for the classification task, traditional CNNs can sometimes struggle to generalise. Manually annotated Regions of Interest (ROI) are often used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as Adversarial Counterfactual Attention (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39\% to 72.55\% and of COVID-19 related findings in lung CT scans from 67.71\% to 70.84\% and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. They are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of 65.05\% on brain CT scans, improving the score of 61.29\% obtained with the best competing method.}
}
Endnote
%0 Conference Paper
%T ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging
%A Alessandro Fontanella
%A Antreas Antoniou
%A Wenwen Li
%A Joanna Wardlaw
%A Grant Mair
%A Emanuele Trucco
%A Amos Storkey
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-fontanella23a
%I PMLR
%P 10153--10169
%U https://proceedings.mlr.press/v202/fontanella23a.html
%V 202
%X In some medical imaging tasks and other settings where only small parts of the image are informative for the classification task, traditional CNNs can sometimes struggle to generalise. Manually annotated Regions of Interest (ROI) are often used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as Adversarial Counterfactual Attention (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84% and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. They are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of 65.05% on brain CT scans, improving the score of 61.29% obtained with the best competing method.
APA
Fontanella, A., Antoniou, A., Li, W., Wardlaw, J., Mair, G., Trucco, E. & Storkey, A. (2023). ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:10153-10169. Available from https://proceedings.mlr.press/v202/fontanella23a.html.
