
User Modelling: An Empirical Study for Affect Perception Through Keyboard and Speech in a Bi-modal User Interface

  • Conference paper
Adaptive Hypermedia and Adaptive Web-Based Systems (AH 2006)

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 4018))


Abstract

This paper presents and discusses an empirical study conducted among different kinds of computer users. The aim of the study was to find out how computer users react when they face emotion-generating situations while interacting with a computer. The study focused on two modes of human-computer interaction, namely input from the keyboard and input from the microphone. The results were analyzed with respect to the characteristics of the participating users (age, educational level, computer knowledge, etc.). These results were then used to create a user modeling component that silently monitors users, records their actions in the two modes of interaction, and interprets those actions in terms of the users' emotional state. This user modeling component can be incorporated into any application that provides adaptive interaction based on affect perception.
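The monitoring component described in the abstract can be sketched as follows. This is a purely illustrative sketch: the class name, the cue names, and the cue-to-emotion mappings are hypothetical placeholders, not taken from the paper's empirical study, which derives its interpretation rules from observed user data.

```python
from dataclasses import dataclass, field

# Hypothetical cue-to-emotion tables for the two interaction modes.
# The paper's actual mappings come from its empirical study and are
# not reproduced here.
KEYBOARD_CUES = {
    "backspace_burst": "frustration",
    "long_typing_pause": "confusion",
}
SPEECH_CUES = {
    "raised_volume": "anger",
    "exclamation": "excitement",
}

@dataclass
class BiModalUserModel:
    """Silently records user actions in two modes of interaction
    and interprets them as emotion labels."""
    events: list = field(default_factory=list)

    def record(self, mode: str, cue: str) -> None:
        # mode is "keyboard" or "speech"; cue is an observed action pattern
        self.events.append((mode, cue))

    def interpret(self) -> list:
        """Map each recorded cue to an emotion label, falling back
        to 'neutral' for cues with no known interpretation."""
        tables = {"keyboard": KEYBOARD_CUES, "speech": SPEECH_CUES}
        return [tables[mode].get(cue, "neutral") for mode, cue in self.events]

model = BiModalUserModel()
model.record("keyboard", "backspace_burst")
model.record("speech", "raised_volume")
print(model.interpret())  # ['frustration', 'anger']
```

An application providing affect-based adaptive interaction would instantiate such a component, feed it low-level keyboard and microphone events, and periodically query the interpreted emotions to adapt its behavior.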

Support for this work was provided by the General Secretariat of Research and Technology, Greece, under the auspices of the PENED program.




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Alepis, E., Virvou, M. (2006). User Modelling: An Empirical Study for Affect Perception Through Keyboard and Speech in a Bi-modal User Interface. In: Wade, V.P., Ashman, H., Smyth, B. (eds) Adaptive Hypermedia and Adaptive Web-Based Systems. AH 2006. Lecture Notes in Computer Science, vol 4018. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11768012_45


  • DOI: https://doi.org/10.1007/11768012_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34696-8

  • Online ISBN: 978-3-540-34697-5

  • eBook Packages: Computer Science, Computer Science (R0)
