Abstract
We propose a large language model and domain-specific model collaboration (LDMC) framework aimed at enhancing smart education. The LDMC framework leverages the comprehensive knowledge of large domain-general models, combines it with the specialized disciplinary knowledge of small domain-specific models, and incorporates pedagogical knowledge drawn from learning-theory models. The multiple knowledge representation produced by this integration enables personalized and adaptive educational experiences. We discuss various applications of the LDMC framework in the context of smart education, including group learning, personalized tutoring, and classroom management. By fusing the intelligence of models at different scales, LDMC constitutes an advanced and comprehensive framework for educational assistance. As artificial intelligence continues to advance, the framework is expected to show considerable potential in the field of smart education.
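To make the collaboration described in the abstract concrete, the following is a minimal Python sketch of how such a pipeline could be wired together. It is not the authors' implementation: the LearnerProfile class, the domain_specific_answer and llm_explain placeholders, and the prompt wording are all hypothetical, standing in respectively for a learning-theory model, a small fine-tuned subject model, and a large language model.

```python
# Hypothetical LDMC-style pipeline sketch. None of these names come from the
# paper; they only illustrate how a small domain-specific model, a large
# general model, and pedagogical knowledge could be composed per query.

from dataclasses import dataclass


@dataclass
class LearnerProfile:
    """Pedagogical knowledge, e.g., output of a learning-style model."""
    style: str   # e.g., "visual" or "verbal"
    level: str   # e.g., "beginner" or "advanced"


def domain_specific_answer(question: str) -> str:
    """Placeholder for a small domain-specific model (a fine-tuned subject expert)."""
    return "Domain-model answer to: " + question


def llm_explain(question: str, expert_answer: str, profile: LearnerProfile) -> str:
    """Placeholder for a large language model that rewrites the expert answer
    into a personalized explanation, conditioned on the learner profile."""
    prompt = (
        f"Question: {question}\n"
        f"Expert answer: {expert_answer}\n"
        f"Explain this for a {profile.level} learner who prefers {profile.style} material."
    )
    # A real system would call an LLM API here; this sketch just echoes the prompt.
    return "LLM explanation based on prompt:\n" + prompt


def ldmc_tutor(question: str, profile: LearnerProfile) -> str:
    """Compose the two models: domain knowledge first, then pedagogical adaptation."""
    expert_answer = domain_specific_answer(question)
    return llm_explain(question, expert_answer, profile)


if __name__ == "__main__":
    profile = LearnerProfile(style="visual", level="beginner")
    print(ldmc_tutor("Why does the moon have phases?", profile))
```

In a deployed system, each placeholder would be replaced by a real model call, and the learner profile would be produced by an actual learning-style assessment rather than set by hand.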
Author information
Contributions
Yawei LUO designed the research, conducted the experiment, and drafted the paper. Yi YANG revised and finalized the paper.
Ethics declarations
Yi YANG is an editorial board member of Frontiers of Information Technology & Electronic Engineering, and he was not involved with the peer review process of this paper. Both authors declare that they have no conflict of interest.
Additional information
Project supported by the National Key R&D Program of China (No. 2020AAA0108800), the National Natural Science Foundation of China (Nos. 62293554, 62206249, and U2336212), the Natural Science Foundation of Zhejiang Province, China (No. LZ24F020002), and the Young Elite Scientists Sponsorship Program by CAST (No. 2023QNRC001)
About this article
Cite this article
Luo, Y., Yang, Y. Large language model and domain-specific model collaboration for smart education. Front Inform Technol Electron Eng 25, 333–341 (2024). https://doi.org/10.1631/FITEE.2300747