Abstract
Knowledge distillation (KD) transfers knowledge from a large teacher network to a small student network. Current KD methods either make a single student mimic diverse teachers via knowledge amalgamation or encourage many students to learn mutually or from themselves without the teacher's supervision. Intuitively, it may be suboptimal to focus on teacher diversity while ignoring the teacher-student gap, or to spotlight student co-learning without the guidance of the teacher. Moreover, such methods mainly distill deep features from intermediate layers, so pure logit distillation remains largely underexplored. In this paper, we propose a simple yet effective logit distillation model based on student diversity: many students first mimic a teacher via logit distillation, and their individual knowledge is then exploited to collectively train a single strong student, again via logit distillation. To this end, we develop a multi-branch shared network whose branches act as diverse students that grasp the teacher's knowledge to different degrees. Since these students share different numbers of network layers, they hold different yet homogeneous knowledge, which provides a reliable path for bridging the teacher-student gap. To collectively train the final student, we fuse the semantics of all the students so that attention is concentrated on the most informative features for effective knowledge transfer. Extensive experiments on various datasets demonstrate the effectiveness of our approach.
This work was funded by the Haihe Laboratory in Tianjin, Grant No. 22HHXCJC00007.
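The building block of the scheme described above is standard softened-logit knowledge distillation. The following minimal PyTorch sketch shows that loss and one plausible shape of the two-stage objective; the temperature `T`, the weight `alpha`, and the `fused_logits` input (standing in for the attention-fused branch semantics) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of softened-logit KD and a two-stage objective in the spirit
# of the abstract. Hyperparameters and the fusion step are assumptions.
import torch
import torch.nn.functional as F

def kd_logit_loss(student_logits, teacher_logits, T=4.0):
    """Standard softened-logit KD loss (Hinton et al., 2015)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def two_stage_loss(branch_logits_list, fused_logits, final_logits,
                   teacher_logits, labels, T=4.0, alpha=0.5):
    # Stage 1 (assumed form): every branch of the shared multi-branch
    # student mimics the teacher's logits.
    stage1 = sum(kd_logit_loss(b, teacher_logits, T) for b in branch_logits_list)
    # Stage 2 (assumed form): the fused branch logits supervise the single
    # final student; detach() stops gradients flowing back into the fusion.
    stage2 = kd_logit_loss(final_logits, fused_logits.detach(), T)
    ce = F.cross_entropy(final_logits, labels)
    return stage1 + stage2 + alpha * ce
```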
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, D., Lan, L., Wang, M., Zhang, X., Liang, T., Luo, Z. (2023). Logit Distillation via Student Diversity. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1792. Springer, Singapore. https://doi.org/10.1007/978-981-99-1642-9_29
DOI: https://doi.org/10.1007/978-981-99-1642-9_29
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-1641-2
Online ISBN: 978-981-99-1642-9
eBook Packages: Computer Science, Computer Science (R0)