Abstract
Affective interaction in tutoring environments has attracted great interest among researchers in this community, spurring the development of various systems to capture learners' emotional states. Young children are one of the largest learner groups in digital learning environments, yet these studies have rarely targeted them. Our current study leverages computer vision and deep learning to analyze young children's learning-related affective states. We developed an effective recognition system that computes the probability that a child is exhibiting a neutral or positive affective state. Our results showed that the prototype achieved an average affective state prediction accuracy of 93.05%.
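To make the pipeline concrete, the sketch below shows the general shape of such a system: a face crop passes through convolutional feature extraction and a softmax layer that yields the two class probabilities (neutral vs. positive). This is a minimal illustrative sketch in numpy, not the authors' architecture; the function and parameter names (`predict_affect`, `kernels`, `W`, `b`) are hypothetical, and the weights here are random rather than trained.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_affect(face, kernels, W, b):
    """Return [P(neutral), P(positive)] for a grayscale face crop.

    Each kernel yields one feature via ReLU + global average pooling;
    a linear layer plus softmax turns the features into probabilities.
    """
    feats = np.array([np.maximum(conv2d_valid(face, k), 0).mean()
                      for k in kernels])
    return softmax(W @ feats + b)

# Usage with random (untrained) weights on a dummy 48x48 face crop:
rng = np.random.default_rng(0)
face = rng.random((48, 48))
kernels = rng.standard_normal((4, 3, 3))
W = rng.standard_normal((2, 4))
b = np.zeros(2)
p = predict_affect(face, kernels, W, b)  # two probabilities summing to 1
```

A production system of the kind described would replace the random kernels with a deep network trained on labeled children's facial-expression data, but the probabilistic output interface is the same.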
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Farzaneh, A.H., Kim, Y., Zhou, M., Qi, X. (2019). Developing a Deep Learning-Based Affect Recognition System for Young Children. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science(), vol 11626. Springer, Cham. https://doi.org/10.1007/978-3-030-23207-8_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-23206-1
Online ISBN: 978-3-030-23207-8
eBook Packages: Computer Science (R0)