Improving the Ability of Deep Neural Networks to Use Information from Multiple Views in Breast Cancer Screening

Nan Wu, Stanisław Jastrzębski, Jungkyu Park, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR 121:827-842, 2020.

Abstract

In breast cancer screening, radiologists make the diagnosis based on images that are taken from two angles. Inspired by this, we seek to improve the performance of deep neural networks applied to this task by encouraging the model to use information from both views of the breast. First, we took a closer look at the training process and observed an imbalance between learning from the two views. In particular, we observed that the layers processing one of the views have parameters with gradients of larger magnitude, and contribute more to the overall loss reduction. Next, we tested several methods aimed at utilizing both views more equally during training. We found that using the same weights to process both views, or using modality dropout, leads to a boost in performance. Looking forward, our results point to improving learning dynamics as a promising avenue for better utilization of multiple views in deep neural networks for medical diagnosis.
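To make the two ideas highlighted in the abstract concrete, below is a minimal, hypothetical PyTorch-style sketch of a two-view classifier. It is not the authors' actual architecture (their per-view columns are much larger networks); the class and function names (TwoViewClassifier, make_column), the layer sizes, and the dropout probability are illustrative assumptions. The sketch shows (a) weight sharing, where both views are processed by the same column parameters, (b) a simple form of modality dropout, where one view's features are occasionally zeroed during training, and (c) how the per-view gradient imbalance mentioned in the abstract could be inspected.

import torch
import torch.nn as nn


class TwoViewClassifier(nn.Module):
    """Toy two-view (CC and MLO) image classifier; illustrative only."""

    def __init__(self, share_weights: bool = True, modality_dropout_p: float = 0.2):
        super().__init__()

        def make_column() -> nn.Module:
            # A small convolutional "column" standing in for the per-view
            # feature extractor of a multi-view mammography model.
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.cc_column = make_column()
        # Weight sharing: both views reuse the same column parameters.
        self.mlo_column = self.cc_column if share_weights else make_column()
        self.modality_dropout_p = modality_dropout_p
        self.classifier = nn.Linear(2 * 32, 2)

    def forward(self, x_cc: torch.Tensor, x_mlo: torch.Tensor) -> torch.Tensor:
        h_cc = self.cc_column(x_cc)
        h_mlo = self.mlo_column(x_mlo)
        if self.training and torch.rand(1).item() < self.modality_dropout_p:
            # Modality dropout: zero out one view's features so the
            # classifier cannot rely on a single dominant view.
            if torch.rand(1).item() < 0.5:
                h_cc = torch.zeros_like(h_cc)
            else:
                h_mlo = torch.zeros_like(h_mlo)
        return self.classifier(torch.cat([h_cc, h_mlo], dim=1))


# Inspecting the per-view gradient imbalance on a dummy batch.
model = TwoViewClassifier(share_weights=False)
x_cc, x_mlo = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
labels = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x_cc, x_mlo), labels)
loss.backward()
cc_grad = sum(p.grad.norm().item() for p in model.cc_column.parameters())
mlo_grad = sum(p.grad.norm().item() for p in model.mlo_column.parameters())
print(f"gradient norm, CC column: {cc_grad:.3f}  MLO column: {mlo_grad:.3f}")

Under this sketch, setting share_weights=True removes the imbalance by construction (there is only one set of column parameters), while modality dropout discourages the classifier from leaning on whichever view happens to learn faster.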

Cite this Paper


BibTeX
@InProceedings{pmlr-v121-wu20a,
  title     = {Improving the Ability of Deep Neural Networks to Use Information from Multiple Views in Breast Cancer Screening},
  author    = {Wu, Nan and Jastrz\k{e}bski, Stanis\l{}aw and Park, Jungkyu and Moy, Linda and Cho, Kyunghyun and Geras, Krzysztof J.},
  booktitle = {Proceedings of the Third Conference on Medical Imaging with Deep Learning},
  pages     = {827--842},
  year      = {2020},
  editor    = {Arbel, Tal and Ben Ayed, Ismail and de Bruijne, Marleen and Descoteaux, Maxime and Lombaert, Herve and Pal, Christopher},
  volume    = {121},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v121/wu20a/wu20a.pdf},
  url       = {https://proceedings.mlr.press/v121/wu20a.html}
}
Endnote
%0 Conference Paper
%T Improving the Ability of Deep Neural Networks to Use Information from Multiple Views in Breast Cancer Screening
%A Nan Wu
%A Stanisław Jastrzębski
%A Jungkyu Park
%A Linda Moy
%A Kyunghyun Cho
%A Krzysztof J. Geras
%B Proceedings of the Third Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Tal Arbel
%E Ismail Ben Ayed
%E Marleen de Bruijne
%E Maxime Descoteaux
%E Herve Lombaert
%E Christopher Pal
%F pmlr-v121-wu20a
%I PMLR
%P 827--842
%U https://proceedings.mlr.press/v121/wu20a.html
%V 121
APA
Wu, N., Jastrzębski, S., Park, J., Moy, L., Cho, K. & Geras, K. J. (2020). Improving the Ability of Deep Neural Networks to Use Information from Multiple Views in Breast Cancer Screening. Proceedings of the Third Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 121:827-842. Available from https://proceedings.mlr.press/v121/wu20a.html.
