DOI: 10.1145/3555776.3577643
SAC Conference Proceedings · Research article

Image4Assess: Automatic learning processes recognition using image processing

Published: 07 June 2023

ABSTRACT

Recently, there has been growing interest in improving students' competitiveness in STEM education. Self-reporting and observation are the most commonly used tools for assessing STEM education. Despite their effectiveness, these assessment tools face several challenges: they are labor-intensive and time-consuming, prone to subjective bias, limited by memory, and influenced by social expectations. To address these challenges, we propose an approach called Image4Assess that, by leveraging state-of-the-art machine learning techniques such as convolutional neural networks and transfer learning, automatically and continuously assesses students' learning processes during STEM activities using image processing. Our findings reveal that Image4Assess achieves accuracy, precision, and recall higher than 85% in recognizing students' learning processes. This implies that it is feasible to accurately measure students' learning processes in STEM education from their imagery data. We also found a significant correlation between the learning processes automatically identified by the proposed approach and students' post-test scores, confirming the effectiveness of the approach in real-world classrooms.
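The abstract reports accuracy, precision, and recall above 85% for multi-class learning-process recognition. As an illustration only (this is not the authors' implementation, and the class labels below are hypothetical), the sketch shows how these three metrics are typically computed from a classifier's predictions, with precision and recall macro-averaged over classes:

```python
def evaluate(y_true, y_pred, labels):
    """Compute accuracy and macro-averaged precision/recall
    from parallel lists of true and predicted class labels."""
    assert len(y_true) == len(y_pred) and y_true
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    precisions, recalls = [], []
    for c in labels:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)

    # Macro-average: unweighted mean over classes.
    return (accuracy,
            sum(precisions) / len(precisions),
            sum(recalls) / len(recalls))

# Hypothetical learning-process labels, for illustration only.
labels = ["observing", "building", "testing"]
y_true = ["observing", "building", "building", "testing", "testing", "observing"]
y_pred = ["observing", "building", "testing", "testing", "testing", "observing"]
acc, prec, rec = evaluate(y_true, y_pred, labels)
```

Macro-averaging weights every class equally, which matters when some learning processes occur far less often than others in classroom footage.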


Published in

SAC '23: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing
March 2023, 1932 pages
ISBN: 9781450395175
DOI: 10.1145/3555776
Copyright © 2023 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rate

Overall acceptance rate: 1,650 of 6,669 submissions, 25%
