Cognitive Behaviour Analysis Based on Facial Information Using Depth Sensors

  • Conference paper
  • Part of: Understanding Human Activities Through 3D Sensors (UHA3DS 2016)

Abstract

Cognitive behaviour analysis is of high importance, with many innovative applications in a range of sectors including healthcare, education, robotics and entertainment. In healthcare, cognitive and emotional behaviour analysis helps to improve the quality of life of patients and their families. Amongst the different approaches to cognitive behaviour analysis, significant work has focused on emotion analysis through facial expressions using depth and EEG data. Our work introduces an emotion recognition approach based on facial expressions, using depth data and facial landmarks. A novel dataset was created in which emotions are triggered by long- or short-term memories. This work uses novel features based on a non-linear dimensionality reduction technique, t-SNE, applied to facial landmarks and depth data. Its performance was evaluated in a comparative study, showing that our approach outperforms other state-of-the-art features.
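The abstract describes the method only at a high level. As a rough illustration, a minimal sketch of the core idea, applying t-SNE to concatenated facial-landmark and depth features before classification, might look like the following. The array shapes, the six-class label set, the scikit-learn TSNE implementation and the k-NN classifier are all assumptions made for illustration, not the authors' code.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical inputs: per-frame facial landmarks (e.g. 68 x/y points)
# and a depth patch per face, each flattened into one feature vector.
rng = np.random.default_rng(0)
n_frames = 200
landmarks = rng.random((n_frames, 68 * 2))  # stand-in for tracked landmarks
depth = rng.random((n_frames, 32 * 32))     # stand-in for depth-map patches
labels = rng.integers(0, 6, size=n_frames)  # assumed six emotion classes

# Concatenate the two modalities, then reduce non-linearly with t-SNE
# (van der Maaten and Hinton, 2008) to obtain low-dimensional features.
features = np.hstack([landmarks, depth])
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

# A simple classifier on the embedded features, only to illustrate the idea;
# the paper's actual classifier and evaluation protocol are not reproduced here.
clf = KNeighborsClassifier(n_neighbors=5).fit(embedded, labels)
print("training accuracy:", clf.score(embedded, labels))
```

Note that standard t-SNE has no out-of-sample transform, so for unseen frames the embedding would have to be recomputed or approximated (e.g. with a parametric variant); how the authors handle this is not described in the abstract.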


Author information

Corresponding author

Correspondence to Vasileios Argyriou.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Montenegro, J.M.F., Villarini, B., Gkelias, A., Argyriou, V. (2018). Cognitive Behaviour Analysis Based on Facial Information Using Depth Sensors. In: Wannous, H., Pala, P., Daoudi, M., Flórez-Revuelta, F. (eds) Understanding Human Activities Through 3D Sensors. UHA3DS 2016. Lecture Notes in Computer Science, vol 10188. Springer, Cham. https://doi.org/10.1007/978-3-319-91863-1_2

  • DOI: https://doi.org/10.1007/978-3-319-91863-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91862-4

  • Online ISBN: 978-3-319-91863-1

  • eBook Packages: Computer Science, Computer Science (R0)
