Research on Interactive Intent Recognition Based on Facial Expression and Line of Sight Direction

Conference paper in Advanced Data Mining and Applications (ADMA 2019).

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11888)

Abstract

Interaction intent recognition refers to discriminating and predicting whether a person (user) wants to interact with a robot during the human-robot interaction (HRI) process, and it is one of the key technologies of intelligent robots. This paper studies interactive intent recognition based on visual images, which is of great significance for improving the intelligence of robots. When people communicate with one another, they often adjust their interactions according to each other's emotional state. At present, vision-based interactive intent recognition methods mainly use the user's gestures, line of sight direction, and head posture to judge interaction intention; we have not found an interactive intent recognition method based on the user's emotional state. This paper therefore proposes an interactive intent recognition algorithm that combines facial expression features with line of sight direction. Experimental results show that the intent recognition algorithm with expression recognition achieves 93.3% accuracy, versus 83% for the algorithm without expression recognition; adding expression recognition thus significantly improves the algorithm's performance.

Supported by the Natural Science Foundation of Tianjin (Grant Nos. 16JCYBJC42300, 17JCQNJC00100, 18JCYBJC44000, 18JCYBJC15300) and the National Natural Science Foundation of China (Grant Nos. 6180021345, 61771340).
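As a rough illustration of the fusion idea described in the abstract, the sketch below combines an expression label from an upstream classifier with an estimated line-of-sight direction to produce a binary intent decision. This is a minimal, hypothetical sketch: the expression grouping, the angular tolerance, and every function and parameter name are assumptions for illustration, not the authors' published algorithm.

    # Minimal sketch (not the authors' implementation): fuse a facial
    # expression label with an estimated line-of-sight angle to decide
    # whether a user intends to interact. All thresholds, class names,
    # and helpers here are illustrative assumptions.
    from dataclasses import dataclass

    # Expressions loosely grouped as approach- vs. avoidance-oriented;
    # this grouping is an assumption, not the paper's taxonomy.
    AVOIDANCE_EXPRESSIONS = {"anger", "disgust", "fear", "sadness"}

    @dataclass
    class Observation:
        expression: str        # label from an upstream expression classifier
        gaze_yaw_deg: float    # horizontal gaze angle relative to the camera
        gaze_pitch_deg: float  # vertical gaze angle relative to the camera

    def looking_at_robot(obs: Observation, tolerance_deg: float = 10.0) -> bool:
        """True if the estimated line of sight falls within a cone around
        the camera axis (the tolerance is an assumed parameter)."""
        return (abs(obs.gaze_yaw_deg) <= tolerance_deg
                and abs(obs.gaze_pitch_deg) <= tolerance_deg)

    def interaction_intent(obs: Observation) -> bool:
        """Gaze toward the robot is the primary cue; the expression
        vetoes cases where the user signals avoidance."""
        if not looking_at_robot(obs):
            return False
        if obs.expression in AVOIDANCE_EXPRESSIONS:
            return False  # user faces the robot but signals avoidance
        return True       # gaze on robot plus neutral/approach expression

    if __name__ == "__main__":
        print(interaction_intent(Observation("happiness", 3.0, -2.0)))  # True
        print(interaction_intent(Observation("anger", 1.0, 0.0)))       # False

In practice the gaze cone and the expression grouping would be tuned on data; the point of the sketch is only that gaze supplies the primary cue while the expression confirms or vetoes it.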



Author information


Correspondence to Jianming Wang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Ren, S. et al. (2019). Research on Interactive Intent Recognition Based on Facial Expression and Line of Sight Direction. In: Li, J., Wang, S., Qin, S., Li, X., Wang, S. (eds) Advanced Data Mining and Applications. ADMA 2019. Lecture Notes in Computer Science, vol 11888. Springer, Cham. https://doi.org/10.1007/978-3-030-35231-8_31


  • DOI: https://doi.org/10.1007/978-3-030-35231-8_31


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-35230-1

  • Online ISBN: 978-3-030-35231-8

  • eBook Packages: Computer Science, Computer Science (R0)
