Multi-view Multi-task Learning for Improving Autonomous Mammogram Diagnosis
Proceedings of the 4th Machine Learning for Healthcare Conference, PMLR 106:571-591, 2019.
The number of women requiring screening and diagnostic mammography is increasing. The recent promise of machine learning on medical images has led to an influx of studies using deep learning for autonomous mammogram diagnosis. We present a novel multi-view multi-task (MVMT) convolutional neural network (CNN) trained to predict, in addition to the cancer diagnosis, the radiological assessments known to be associated with cancer, such as breast density and conspicuity. We show on full-field mammograms that multi-task learning has three advantages: 1) learning refined feature representations associated with cancer improves the classification performance of the diagnosis task, 2) issuing radiological assessments provides an additional layer of model interpretability that a radiologist can use to debug and scrutinize the diagnoses provided by the CNN, and 3) automatically annotating radiological reports improves the radiological workflow. Results obtained on a private dataset of over 7,000 patients show that our MVMT network attained an AUROC and AUPRC of 0.855 $\pm$ 0.021 and 0.646 $\pm$ 0.023, respectively, and improved on the performance of other state-of-the-art multi-view CNNs.