Abstract
Knowledge Distillation (KD) methods are widely adopted to reduce the high computational and memory costs of large-scale pre-trained models. However, little research has examined KD in the context of relation classification. Although directly applying conventional KD methods to relation classification is straightforward, the concept of a "relation" is highly ambiguous, so machine learning models tend to make uncertain relation predictions. Moreover, the label smoothing process in KD introduces further uncertainty into the supervision, degrading student model performance. In this work, we propose a confusion-based KD method in which the uncertainty of the supervision is adaptively adjusted according to how confused the teacher model is about a relation. In addition, we propose a new knowledge adjustment method, logit replacement, which adaptively corrects the teacher's mistakes to prevent genetic errors (errors inherited by the student from the teacher). We conduct comprehensive experiments on the SemEval-2010 Task 8 relation classification benchmark, and the test results demonstrate the effectiveness of the proposed methods.
Supported in part by the Major Project of Philosophy and Social Science Research in Jiangsu Universities of China (2020SJZDA102).
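The abstract does not give the exact formulation of the confusion-based weighting or of logit replacement, so the following PyTorch sketch is only one plausible reading of the two ideas, not the authors' implementation: the function names, the entropy-based confusion measure, and the rule of swapping the teacher's top logit with the gold-class logit when the teacher is wrong are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def logit_replacement(teacher_logits, labels):
    """Knowledge adjustment sketch (assumed rule): when the teacher mispredicts,
    swap the logit of its wrongly ranked top class with the logit of the gold
    class, so the adjusted distribution ranks the gold relation first."""
    adjusted = teacher_logits.clone()
    pred = adjusted.argmax(dim=-1)
    idx = (pred != labels).nonzero(as_tuple=True)[0]   # mispredicted examples
    gold_vals = adjusted[idx, labels[idx]].clone()
    pred_vals = adjusted[idx, pred[idx]].clone()
    adjusted[idx, labels[idx]] = pred_vals
    adjusted[idx, pred[idx]] = gold_vals
    return adjusted

def confusion_weighted_kd_loss(student_logits, teacher_logits, labels,
                               temperature=4.0, alpha=0.5):
    """Confusion-based KD sketch: weight the soft-label term per example by the
    teacher's uncertainty, measured here (as an assumption) by the normalised
    entropy of its softened output distribution."""
    teacher_logits = logit_replacement(teacher_logits, labels)
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)
    # Normalised entropy in [0, 1]: high = confused teacher, low = confident.
    entropy = -(t_prob * t_prob.clamp_min(1e-12).log()).sum(-1)
    confusion = entropy / torch.log(torch.tensor(float(t_prob.size(-1))))
    # Trust a confident teacher more; fall back to hard labels when confused.
    soft_w = alpha * (1.0 - confusion)
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  t_prob, reduction="none").sum(-1) * temperature ** 2
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    return (soft_w * kd + (1.0 - soft_w) * ce).mean()
```

Under this reading, the per-example weight shrinks the distillation term exactly where the teacher's relation prediction is most ambiguous, which is the behaviour the abstract attributes to the confusion-based adjustment.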
References
Ba, J., Caruana, R.: Do deep nets really need to be deep? Adv. Neural Inf. Process. Syst. 27 (2014)
Cai, R., Zhang, X., Wang, H.: Bidirectional recurrent convolutional neural network for relation classification. In: Proceedings of ACL (2016)
Chen, W.C., Chang, C.C., Lee, C.R.: Knowledge distillation with feature maps for image classification. In: Proceedings of ACCV (2018)
Chen, Y.C., Gan, Z., Cheng, Y., Liu, J., Liu, J.: Distilling knowledge learned in BERT for text generation. arXiv preprint arXiv:1911.03829 (2019)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Gou, J., Yu, B., Maybank, S.J., Tao, D.: Knowledge distillation: a survey. Int. J. Comput. Vision 129(6), 1789–1819 (2021). https://doi.org/10.1007/s11263-021-01453-z
Hahn, S., Choi, H.: Self-knowledge distillation in natural language processing. arXiv preprint arXiv:1908.01851 (2019)
Hendrickx, I., et al.: SemEval-2010 Task 8: multi-way classification of semantic relations between pairs of nominals. arXiv preprint arXiv:1911.10422 (2019)
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)
Jiao, X., et al.: TinyBERT: distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351 (2019)
Lee, J., Seo, S., Choi, Y.S.: Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing. Symmetry 11(6), 785 (2019)
Li, Z., Hoiem, D.: Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2935–2947 (2017)
Liu, J., Chen, Y., Liu, K.: Exploiting the ground-truth: an adversarial imitation based knowledge distillation approach for event detection. In: Proceedings of AAAI (2019)
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32 (2019)
Pawar, S., Palshikar, G.K., Bhattacharyya, P.: Relation extraction: a survey. arXiv preprint arXiv:1712.05191 (2017)
Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
Santos, C.N.d., Xiang, B., Zhou, B.: Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580 (2015)
Sau, B.B., Balasubramanian, V.N.: Deep model compression: distilling knowledge from noisy teachers. arXiv preprint arXiv:1610.09650 (2016)
Shen, Y., Huang, X.J.: Attention-based convolutional neural network for semantic relation extraction. In: Proceedings of COLING (2016)
Sun, S., Cheng, Y., Gan, Z., Liu, J.: Patient knowledge distillation for BERT model compression. arXiv preprint arXiv:1908.09355 (2019)
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of CVPR (2016)
Tang, R., Lu, Y., Liu, L., Mou, L., Vechtomova, O., Lin, J.: Distilling task-specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136 (2019)
Wang, X., Fu, T., Liao, S., Wang, S., Lei, Z., Mei, T.: Exclusivity-consistency regularized knowledge distillation for face recognition. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12369, pp. 325–342. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58586-0_20
Wei, Z., Su, J., Wang, Y., Tian, Y., Chang, Y.: A novel cascade binary tagging framework for relational triple extraction. arXiv preprint arXiv:1909.03227 (2019)
Wen, T., Lai, S., Qian, X.: Preparing lessons: improve knowledge distillation with better supervision. Neurocomputing 454, 25–33 (2021)
Wu, S., He, Y.: Enriching pre-trained language model with entity information for relation classification. In: Proceedings of CIKM (2019)
Yang, Z., et al.: TextBrewer: an open-source knowledge distillation toolkit for natural language processing. In: Proceedings of ACL, pp. 9–16 (2020)
Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: Proceedings of COLING (2014)
Zhou, P., et al.: Attention-based bidirectional long short-term memory networks for relation classification. In: Proceedings of ACL (2016)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
He, H., Ren, Y., Li, Z., Xue, J. (2022). Adaptive Knowledge Distillation for Efficient Relation Classification. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13530. Springer, Cham. https://doi.org/10.1007/978-3-031-15931-2_13
DOI: https://doi.org/10.1007/978-3-031-15931-2_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15930-5
Online ISBN: 978-3-031-15931-2