Attention Distillation for Detection Transformers: Application to Real-Time Video Object Detection in Ultrasound
Proceedings of Machine Learning for Health, PMLR 158:26-37, 2021.
Abstract
We introduce a method for efficient knowledge distillation of transformer-based object detectors. The proposed “attention distillation” makes use of the self-attention matrices generated within the layers of the state-of-the-art detection transformer (DETR) model. Localization information from the attention maps of a large teacher network is distilled into smaller student networks capable of running at much higher speeds. We further investigate distilling spatio-temporal information captured by 3D detection transformer networks into 2D object detectors that process only single frames. We apply the approach to the clinically important problem of detecting medical instruments in real time from ultrasound video sequences, where inference speed is critical on computationally resource-limited hardware. We observe that, via attention distillation, student networks are able to approach the detection performance of larger teacher networks while meeting strict computational requirements. Experiments demonstrate notable gains in accuracy and speed compared to detection transformer models trained without attention distillation.
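To make the core idea concrete, the sketch below shows one plausible form of an attention-distillation objective: the student's self-attention maps are regressed onto the teacher's with a simple MSE loss. This is an illustrative assumption, not the paper's exact formulation; the function and variable names (`attention_distillation_loss`, `student_attn`, `teacher_attn`, `return_attention`, `lambda_attn`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(student_attn: torch.Tensor,
                                teacher_attn: torch.Tensor) -> torch.Tensor:
    """Minimal sketch of an attention-distillation loss.

    student_attn, teacher_attn: (batch, heads, queries, keys) attention
    weights taken from corresponding transformer layers. If the student
    uses fewer heads or a coarser spatial resolution, the teacher maps
    would first need to be pooled or interpolated to match (omitted here).
    """
    # The teacher's maps act as fixed regression targets, so gradients
    # must not flow back into the teacher network.
    return F.mse_loss(student_attn, teacher_attn.detach())


# Hypothetical training step: the distillation term is added to the
# standard DETR detection losses. `teacher`, `student`, and
# `detr_losses` are placeholders for illustration.
#
# with torch.no_grad():
#     _, teacher_attn = teacher(images, return_attention=True)
# outputs, student_attn = student(images, return_attention=True)
# loss = detr_losses(outputs, targets) \
#        + lambda_attn * attention_distillation_loss(student_attn, teacher_attn)
```

Under this reading, the attention maps carry the teacher's localization cues, so matching them gives the smaller student a denser supervision signal than bounding-box labels alone.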