
A New Facial Expression Processing System for an Affectively Aware Robot

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12662)

Abstract

This paper introduces an emotion recognition system for an affectively aware hospital robot for children, together with LabelFace, a data labeling and processing tool for facial expression recognition (FER) employed within the presented system. The tool provides an interface for automatic and manual labeling and for visual information processing in emotion and facial action unit (AU) recognition, using assistant models based on deep learning. It was developed primarily to support the affective intelligence of a socially assistive robot used in the healthcare of children with hearing impairments. The proposed approach uses multi-label AU detection models for this purpose. To the best of our knowledge, the proposed child AU detector is the first model targeting 5- to 9-year-old children. The model is trained on well-known posed datasets and tested on a real-world non-posed dataset collected from hearing-impaired children. For benchmarking, LabelFace is compared to a widely used facial expression tool in terms of data processing and data labeling capabilities, and its AU detector models for children perform better on both posed and non-posed test data.
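The abstract's distinction between single-label emotion classification and multi-label AU detection can be sketched as follows: rather than a softmax choosing one emotion class, each action unit receives an independent sigmoid score, so several AUs can be active in the same face. This is a minimal illustration only; the AU subset, logits, and threshold are hypothetical and not taken from the paper's models.

```python
import numpy as np

# Illustrative subset of FACS action units (AU6 = cheek raiser,
# AU12 = lip corner puller; together they characterize a smile).
AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU12"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect_aus(logits, threshold=0.5):
    """Turn per-AU logits (e.g. from a CNN head) into the set of active AUs.

    Each AU is scored independently, which is what makes the task
    multi-label: any number of AUs may exceed the threshold at once.
    """
    probs = sigmoid(np.asarray(logits, dtype=float))
    return {name: p for name, p in zip(AU_NAMES, probs) if p >= threshold}

# Hypothetical logits a trained network might emit for a smiling face:
# both AU6 and AU12 come out active simultaneously.
active = detect_aus([-2.0, -1.5, -3.0, 1.2, 2.5])
print(sorted(active))  # ['AU12', 'AU6']
```

The independent-sigmoid head (one binary output per AU, typically trained with per-output binary cross-entropy) is the standard formulation for multi-label AU detection, as opposed to the mutually exclusive softmax used for basic-emotion classification.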


Notes

  1. https://www.kairos.com/.

  2. https://skybiometry.com/.

  3. https://findface.pro/en/.

  4. https://www.noldus.com/facereader.


Acknowledgment

This study is supported by the Scientific and Technological Research Council of Turkey (TUBITAK), RoboRehab project, under contract no. 118E214.

Author information

Correspondence to Engin Baglayici.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Baglayici, E., Gurpinar, C., Uluer, P., Kose, H. (2021). A New Facial Expression Processing System for an Affectively Aware Robot. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12662. Springer, Cham. https://doi.org/10.1007/978-3-030-68790-8_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-68790-8_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68789-2

  • Online ISBN: 978-3-030-68790-8

  • eBook Packages: Computer Science, Computer Science (R0)
