Enlightening the Student in Knowledge Distillation


Abstract:

Knowledge distillation is a common method of model compression, which uses large models (teacher networks) to guide the training of small models (student networks). However, the student may have a hard time absorbing the knowledge from a sophisticated teacher because of the capacity and confidence gaps between them. To address this issue, a new knowledge distillation and refinement (KDrefine) framework is proposed to enlighten the student by expanding and refining its network structure. In addition, a confidence refinement strategy is used to generate adaptive softened logits for efficient distillation. Experiments show that the proposed framework outperforms state-of-the-art methods on both the CIFAR-100 and Tiny-ImageNet datasets. The code is available at https://github.com/YujieZheng99/KDrefine.
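For context, below is a minimal sketch of the standard soft-target distillation loss that the abstract builds on, where softened teacher and student distributions are matched with a KL-divergence term. It is not the paper's confidence-refinement strategy or the KDrefine structure expansion; the function name, temperature, and loss weighting are illustrative assumptions only.

    # Sketch of a Hinton-style knowledge-distillation loss (assumed baseline,
    # not the KDrefine method described in the abstract).
    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.9):
        """Combine a soft distillation term with the usual cross-entropy term."""
        # Soften both distributions with the same temperature, then match them
        # with KL divergence (scaled by T^2 to keep gradient magnitudes stable).
        soft_student = F.log_softmax(student_logits / temperature, dim=1)
        soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
        distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
        # Hard-label supervision on the unsoftened student logits.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * distill + (1.0 - alpha) * ce

Usage: student_logits and teacher_logits are (batch, num_classes) tensors produced by the student and teacher networks on the same inputs, and labels is a (batch,) tensor of class indices. The abstract's confidence refinement would replace the fixed temperature with adaptively softened logits.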
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023
Conference Location: Rhodes Island, Greece

