
Attention-Guided Knowledge Distillation for Efficient Single-Stage Detector


Abstract:

Knowledge distillation has been successfully applied to image classification for model acceleration. Some works have also applied this technique to object detection, but they all treat different feature regions equally when performing feature mimicking. In this paper, we propose an end-to-end attention-guided knowledge distillation method to train efficient single-stage detectors with much smaller backbones. More specifically, we introduce an attention mechanism that prioritizes the transfer of important knowledge by focusing on a sparse set of hard samples, leading to a more thorough distillation process. In addition, the proposed distillation method provides an easy way to train efficient detectors without the tedious ImageNet pre-training procedure. Extensive experiments on the PASCAL VOC and CityPersons datasets demonstrate the effectiveness of the proposed approach. We achieve 57.96% and 69.48% mAP on VOC07 with 1/8 VGG16 and 1/4 VGG16 backbones, greatly outperforming their ImageNet pre-trained counterparts by 11.7% and 7.1%, respectively.
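The abstract's core idea is an attention map that reweights the feature-mimic loss so distillation concentrates on important regions rather than treating all feature locations equally. The PyTorch sketch below illustrates one common way such an attention-weighted mimic loss can be built; the class name, the teacher-activation-based attention map, and the 1x1 adaptation layer are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGuidedMimicLoss(nn.Module):
    # Illustrative attention-weighted feature-mimic loss (a sketch,
    # not necessarily the paper's exact formulation).
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 adapter so the student feature matches the teacher's channels.
        self.adapt = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, f_student, f_teacher):
        f_s = self.adapt(f_student)                        # (N, C_t, H, W)
        n, _, h, w = f_teacher.shape
        # Spatial attention from the teacher: mean absolute activation
        # over channels, normalized over all H*W locations.
        attn = f_teacher.abs().mean(dim=1)                 # (N, H, W)
        attn = F.softmax(attn.view(n, -1), dim=1).view(n, 1, h, w)
        # Per-location squared error, reweighted so regions the teacher
        # activates strongly dominate the distillation loss.
        sq_err = (f_s - f_teacher.detach()).pow(2).mean(dim=1, keepdim=True)
        return (attn * sq_err).sum(dim=(1, 2, 3)).mean()

# Hypothetical usage on one backbone stage of teacher/student detectors:
loss_fn = AttentionGuidedMimicLoss(student_channels=64, teacher_channels=512)
f_s = torch.randn(2, 64, 38, 38)    # e.g. a 1/8 VGG16 student feature map
f_t = torch.randn(2, 512, 38, 38)   # corresponding full VGG16 teacher map
loss = loss_fn(f_s, f_t)            # added to the detection loss during training

Normalizing the attention map over spatial locations keeps the loss scale stable across feature-map sizes; in a full training loop this mimic term would be combined with the detector's standard classification and localization losses.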
Date of Conference: 05-09 July 2021
Date Added to IEEE Xplore: 09 June 2021
Conference Location: Shenzhen, China

