Abstract
In text mining, prevalent deep learning models focus primarily on mapping input features to predicted outputs and lack a self-dialectical thinking process. Inspired by self-reflective mechanisms in human cognition, we hypothesize that existing models can emulate human decision-making processes and automatically rectify erroneous predictions. We introduce the Self-adaptive Reflection Enhanced pre-trained deep learning Model (S-REM) to validate this hypothesis and to determine which types of knowledge warrant reflection. Built on a pre-trained model, S-REM introduces the local explanation for the pseudo-label and the global explanation for all labels as explanation knowledge, and integrates keyword knowledge from a TF-IDF model to form reflection knowledge. Guided by the key explanation features, the pre-trained model reflects on its initial decision through two reflection methods and optimizes the predictions of deep learning models. We conducted experiments with local and global reflection variants of S-REM on two text mining tasks across four datasets: three public and one private. The results demonstrate that our method improves the accuracy of state-of-the-art deep learning models. Furthermore, the method can serve as a foundational step toward developing explainable models through integration with various deep learning models.
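To make the keyword-knowledge component of the abstract concrete, the following is a minimal, self-contained sketch of how TF-IDF keywords could be extracted per document. It is purely illustrative: the function name `tfidf_keywords`, the tokenization, and the smoothed IDF variant are assumptions, not the actual S-REM implementation, which is not specified here.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Return the top-k TF-IDF-scored keywords for each document.

    A hypothetical stand-in for the 'keyword knowledge' that S-REM is
    described as drawing from a TF-IDF model; interfaces are illustrative.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)

    # Document frequency: in how many documents each term appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))

    keywords = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Smoothed TF-IDF: term frequency times log-scaled inverse
        # document frequency (add-one smoothing avoids division by zero).
        scores = {
            term: (count / len(tokens))
            * math.log((1 + n_docs) / (1 + df[term]))
            for term, count in tf.items()
        }
        keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return keywords
```

In a reflection pipeline of the kind the abstract outlines, such per-document keywords would be combined with local and global explanation features before the model revisits its initial prediction.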







Data Availability
The data and code can be shared via email once the work is published.
Funding
The research reported in this paper was supported by the National Natural Science Foundation of China under grant 72204155 and the Natural Science Foundation of Shanghai under grant 23ZR1423100.
Author information
Contributions
Methodology and resources were provided by Xinzhi Wang. Review, editing, and original draft preparation were done by Xinzhi Wang and Mengyue Li. Investigation and visualization were done by Mengyue Li and Chenyang Wang. Software and data curation were done by Mengyue Li. Conceptualization and formal analysis were discussed by Hang Yu and Vijayan Sugumaran. This work was supervised by Hang Yu and Xinzhi Wang.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, X., Li, M., Yu, H. et al. Enhancing Pre-trained Deep Learning Model with Self-Adaptive Reflection. Cogn Comput 16, 3468–3483 (2024). https://doi.org/10.1007/s12559-024-10348-3