On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

  • Conference paper

Abstract

Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities leads to better results towards building a multimodal affect recognition system.
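The abstract does not specify how the two modalities are combined; a common decision-level (late) fusion scheme is a weighted average of each subsystem's probability distribution over the six states. The sketch below illustrates that approach under this assumption only: the `fuse` helper, the weight `w_face`, and the example probability vectors are hypothetical and are not the authors' actual implementation.

```python
# Minimal sketch of decision-level (late) fusion of the two modalities.
# Assumption: each subsystem (visual-facial, keyboard-stroke) outputs a
# probability distribution over the six affective states; the fused
# decision is a weighted average of the two distributions.

import numpy as np

STATES = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

def fuse(p_face: np.ndarray, p_keys: np.ndarray, w_face: float = 0.6) -> str:
    """Weighted-sum fusion of two per-modality probability vectors.

    p_face, p_keys: probabilities over STATES from the visual-facial and
    keyboard-stroke subsystems (non-negative, each summing to 1).
    w_face: relative trust in the facial modality (illustrative value).
    """
    fused = w_face * p_face + (1.0 - w_face) * p_keys
    return STATES[int(np.argmax(fused))]

# Example: the facial classifier weakly favors 'sadness', and keystroke
# evidence (e.g. slow typing, frequent backspaces) reinforces it.
p_face = np.array([0.30, 0.05, 0.35, 0.10, 0.10, 0.10])
p_keys = np.array([0.20, 0.05, 0.45, 0.05, 0.15, 0.10])
print(fuse(p_face, p_keys))  # -> "sadness"
```

In practice the weight would be tuned on validation data or made context-dependent, for instance down-weighting the facial modality under poor lighting; this is one plausible reading of how the two subsystems could complement each other.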

Author information

Correspondence to I.-O. Stathopoulou.

Copyright information

© 2010 Springer-Verlag London

About this paper

Cite this paper

Stathopoulou, I.-O., Alepis, E., Tsihrintzis, G., Virvou, M. (2010). On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information. In: Bramer, M., Ellis, R., Petridis, M. (eds) Research and Development in Intelligent Systems XXVI. Springer, London. https://doi.org/10.1007/978-1-84882-983-1_35

  • DOI: https://doi.org/10.1007/978-1-84882-983-1_35

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-982-4

  • Online ISBN: 978-1-84882-983-1

  • eBook Packages: Computer Science (R0)
