
Combining Deep Learning and Computer Vision Techniques for Automatic Analysis of the Learning Process in STEM Education

  • Conference paper
  • In: Innovative Technologies and Learning (ICITL 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13449)


Abstract

STEM education has been a focus of research in recent years, as evidenced by the growing number of studies aimed at enhancing learners' future competitiveness. Compared with traditional teaching, assessment in STEM education emphasizes what is learned during collaboration and problem-solving rather than the score on a final exam or project. However, most assessment tools measure learning outcomes through questionnaires or interviews, which lack objective standards and require considerable time for data processing. We address these problems with a system that combines deep learning and computer vision techniques to automatically recognize the learner's learning process in STEM education. System verification shows an average precision of 87.1% and an average recall of 86.4%, which is sufficient to keep track of the learning process.
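The averages reported above are standard detection metrics. As a minimal sketch of how such figures are typically computed, the snippet below derives per-class precision and recall from true-positive, false-positive, and false-negative counts and macro-averages them; the action names and counts are hypothetical illustrations, not data from the paper.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), guarding division by zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical per-action detection counts (tp, fp, fn), for illustration only.
counts = {
    "writing":    (90, 12, 10),
    "assembling": (85, 14, 15),
}

pairs = [precision_recall(*c) for c in counts.values()]
avg_precision = sum(p for p, _ in pairs) / len(pairs)  # macro-averaged precision
avg_recall = sum(r for _, r in pairs) / len(pairs)     # macro-averaged recall
```

Macro-averaging weights each action class equally, which is a common choice when the classes of interest occur with different frequencies in the recorded sessions.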



Author information

Corresponding author

Correspondence to Yueh-Min Huang .


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lee, HY., Chang, WC., Huang, YM. (2022). Combining Deep Learning and Computer Vision Techniques for Automatic Analysis of the Learning Process in STEM Education. In: Huang, YM., Cheng, SC., Barroso, J., Sandnes, F.E. (eds) Innovative Technologies and Learning. ICITL 2022. Lecture Notes in Computer Science, vol 13449. Springer, Cham. https://doi.org/10.1007/978-3-031-15273-3_3


  • DOI: https://doi.org/10.1007/978-3-031-15273-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15272-6

  • Online ISBN: 978-3-031-15273-3

  • eBook Packages: Computer Science (R0)
