Multi-view Multi-task Learning for Improving Autonomous Mammogram Diagnosis

Trent Kyono, Fiona J. Gilbert, Mihaela van der Schaar
Proceedings of the 4th Machine Learning for Healthcare Conference, PMLR 106:571-591, 2019.

Abstract

The number of women requiring screening and diagnostic mammography is increasing. The recent promise of machine learning on medical images has led to an influx of studies using deep learning for autonomous mammogram diagnosis. We present a novel multi-view multi-task (MVMT) convolutional neural network (CNN) trained to predict the radiological assessments known to be associated with cancer, such as breast density, conspicuity, etc., in addition to cancer diagnosis. We show on full-field mammograms that multi-task learning has three advantages: 1) learning refined feature representations associated with cancer improves the classification performance of the diagnosis task, 2) issuing radiological assessments provides an additional layer of model interpretability that a radiologist can use to debug and scrutinize the diagnoses provided by the CNN, and 3) automated annotation of radiological reports improves the radiological workflow. Results obtained on a private dataset of over 7,000 patients show that our MVMT network attained an AUROC and AUPRC of 0.855 $\pm$ 0.021 and 0.646 $\pm$ 0.023, respectively, and improved on the performance of other state-of-the-art multi-view CNNs.

Cite this Paper


BibTeX
@InProceedings{pmlr-v106-kyono19a,
  title = {Multi-view Multi-task Learning for Improving Autonomous Mammogram Diagnosis},
  author = {Kyono, Trent and Gilbert, Fiona J. and van der Schaar, Mihaela},
  booktitle = {Proceedings of the 4th Machine Learning for Healthcare Conference},
  pages = {571--591},
  year = {2019},
  editor = {Doshi-Velez, Finale and Fackler, Jim and Jung, Ken and Kale, David and Ranganath, Rajesh and Wallace, Byron and Wiens, Jenna},
  volume = {106},
  series = {Proceedings of Machine Learning Research},
  month = {09--10 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v106/kyono19a/kyono19a.pdf},
  url = {https://proceedings.mlr.press/v106/kyono19a.html},
  abstract = {The number of women requiring screening and diagnostic mammography is increasing. The recent promise of machine learning on medical images has led to an influx of studies using deep learning for autonomous mammogram diagnosis. We present a novel multi-view multi-task (MVMT) convolutional neural network (CNN) trained to predict the radiological assessments known to be associated with cancer, such as breast density, conspicuity, etc., in addition to cancer diagnosis. We show on full-field mammograms that multi-task learning has three advantages: 1) learning refined feature representations associated with cancer improves the classification performance of the diagnosis task, 2) issuing radiological assessments provides an additional layer of model interpretability that a radiologist can use to debug and scrutinize the diagnoses provided by the CNN, and 3) automated annotation of radiological reports improves the radiological workflow. Results obtained on a private dataset of over 7,000 patients show that our MVMT network attained an AUROC and AUPRC of 0.855 $\pm$ 0.021 and 0.646 $\pm$ 0.023, respectively, and improved on the performance of other state-of-the-art multi-view CNNs.}
}
Endnote
%0 Conference Paper
%T Multi-view Multi-task Learning for Improving Autonomous Mammogram Diagnosis
%A Trent Kyono
%A Fiona J. Gilbert
%A Mihaela van der Schaar
%B Proceedings of the 4th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2019
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v106-kyono19a
%I PMLR
%P 571--591
%U https://proceedings.mlr.press/v106/kyono19a.html
%V 106
%X The number of women requiring screening and diagnostic mammography is increasing. The recent promise of machine learning on medical images has led to an influx of studies using deep learning for autonomous mammogram diagnosis. We present a novel multi-view multi-task (MVMT) convolutional neural network (CNN) trained to predict the radiological assessments known to be associated with cancer, such as breast density, conspicuity, etc., in addition to cancer diagnosis. We show on full-field mammograms that multi-task learning has three advantages: 1) learning refined feature representations associated with cancer improves the classification performance of the diagnosis task, 2) issuing radiological assessments provides an additional layer of model interpretability that a radiologist can use to debug and scrutinize the diagnoses provided by the CNN, and 3) automated annotation of radiological reports improves the radiological workflow. Results obtained on a private dataset of over 7,000 patients show that our MVMT network attained an AUROC and AUPRC of 0.855 $\pm$ 0.021 and 0.646 $\pm$ 0.023, respectively, and improved on the performance of other state-of-the-art multi-view CNNs.
APA
Kyono, T., Gilbert, F.J. & van der Schaar, M. (2019). Multi-view Multi-task Learning for Improving Autonomous Mammogram Diagnosis. Proceedings of the 4th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 106:571-591. Available from https://proceedings.mlr.press/v106/kyono19a.html.