Abstract
In knowledge distillation, numerous methods are devoted to exploring effective knowledge with which to guide the training of a small student network. However, these approaches neglect to stimulate the student network's own capability: a small student network also has the potential to achieve performance comparable to that of a large teacher network. We propose a new framework, Stimulates the Potential for Knowledge Distillation (SPKD). The SPKD framework consists of two components: 1) residual-based local feature normalization (LFNR) and 2) local feature normalization extraction (LFNE). LFNR, added to the student network, enhances the competitiveness of local areas of the feature maps, making better use of information-rich local regions and rendering the stimulated local features more expressive. LFNE extracts locally representative features from the teacher network and transfers them to the student to guide its learning. Extensive experimental results demonstrate that SPKD achieves significant classification results on the benchmark datasets CIFAR-10 and CIFAR-100.
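The chapter body is paywalled, so the following is only a minimal sketch of the mechanism the abstract describes, assuming a window-based local normalization with a residual connection and an MSE feature-matching loss. The names LFNR and lfne_loss, the window size, and the exact loss form are hypothetical illustrations, not the authors' published formulation.

```python
# Hypothetical sketch of the two SPKD components described in the abstract.
# Assumptions: a square local window for the normalization statistics, a
# residual add for LFNR, and MSE matching of locally normalized teacher and
# student features for LFNE. Teacher and student feature maps are assumed
# to have the same shape at the matched stage.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LFNR(nn.Module):
    """Residual-based local feature normalization (sketch).

    Normalizes each spatial position by the mean/variance of its local
    neighborhood, then adds the input back through a residual connection,
    so information-rich local regions are emphasized without discarding
    the original activations.
    """

    def __init__(self, window: int = 3, eps: float = 1e-5):
        super().__init__()
        self.window = window
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pad = self.window // 2
        # Local first and second moments over a window x window neighborhood;
        # count_include_pad=False keeps boundary statistics unbiased.
        mu = F.avg_pool2d(x, self.window, stride=1, padding=pad,
                          count_include_pad=False)
        ex2 = F.avg_pool2d(x * x, self.window, stride=1, padding=pad,
                           count_include_pad=False)
        var = (ex2 - mu * mu).clamp(min=0.0)
        normed = (x - mu) / torch.sqrt(var + self.eps)
        return x + normed  # residual connection


def lfne_loss(teacher_feat: torch.Tensor, student_feat: torch.Tensor,
              lfn: LFNR) -> torch.Tensor:
    """Local feature normalization extraction (sketch): extract locally
    normalized features from the teacher and match the student to them."""
    with torch.no_grad():
        target = lfn(teacher_feat)  # teacher side is not back-propagated
    return F.mse_loss(lfn(student_feat), target)
```

In training, such a distillation term would typically be added to the standard cross-entropy loss (and, in a Hinton-style setup, to a soft-label loss), with features taken from matching stages of the teacher and student networks.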
Acknowledgement
This research is supported by the Sichuan Science and Technology Program (No. 2022YFG0324) and the SWUST Doctoral Foundation under Grant 19zx7102.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Qing, H., Tang, J., Yang, X., Huang, X., Zhu, H., Jiang, N. (2022). Stimulates Potential for Knowledge Distillation. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13532. Springer, Cham. https://doi.org/10.1007/978-3-031-15937-4_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15936-7
Online ISBN: 978-3-031-15937-4