ABSTRACT
Recently, interest has grown in improving students' competitiveness through STEM education. Self-reporting and observation are the most widely used tools for assessing STEM education. Despite their effectiveness, these assessment tools face several challenges: they are labor-intensive and time-consuming, prone to subjective bias, limited by memory, and influenced by social expectations. To address these challenges, we propose an approach called Image4Assess that draws on state-of-the-art machine learning techniques, such as convolutional neural networks and transfer learning, to automatically and continuously assess students' learning processes during STEM activities using image processing. Our findings reveal that the Image4Assess approach achieves accuracy, precision, and recall above 85% in recognizing students' learning processes, implying that it is feasible to accurately measure the learning processes of students in STEM education from their imagery data. We also found a significant correlation between the learning processes automatically identified by our approach and students' post-test scores, confirming the effectiveness of the proposed approach in real-world classrooms.
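The reported evaluation rests on per-class accuracy, precision, and recall of the recognition results. As a minimal sketch of how these metrics are computed from predicted labels (the function, class names, and data below are hypothetical illustrations, not the Image4Assess implementation):

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, plus one-vs-rest precision and recall for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Hypothetical frame-level labels for two learning-process classes
y_true = ["operate", "operate", "observe", "observe"]
y_pred = ["operate", "observe", "observe", "observe"]
acc, prec, rec = classification_metrics(y_true, y_pred, positive="operate")
# acc = 0.75, prec = 1.0, rec = 0.5
```

In a multi-class setting such as recognizing several distinct learning processes, precision and recall would typically be computed per class in this one-vs-rest fashion and then averaged.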
Index Terms
- Image4Assess: Automatic learning processes recognition using image processing