DOI: 10.1145/2851613.2851681

Multimodal human attention detection for reading

Published: 04 April 2016

ABSTRACT

Affective computing in human-computer interaction research enables computers to understand human affect or emotion in order to provide better service. In this paper, we investigate the detection of human attention, which is useful in intelligent e-learning applications. Our principle is to use only ubiquitous hardware available in most computer systems, namely, the webcam and the mouse. Information from the two modalities is fused for effective human attention detection. In our experiments, human subjects read articles while being subjected to different kinds of distraction designed to induce different attention levels. Machine-learning techniques are applied to identify features that are useful for recognizing the attention level. Our results show improved performance with multimodal input, suggesting an interesting direction for affective computing.
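The abstract does not spell out the paper's concrete feature set, fusion scheme, or classifier. Purely as an illustrative sketch of feature-level fusion of webcam and mouse signals, the Python snippet below concatenates hypothetical per-modality feature vectors and compares single-modality classifiers against a fused one; the feature names, the random-forest classifier, and the synthetic data are assumptions made for illustration, not the authors' method.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of fixed-length reading-session windows

    # Hypothetical per-window features from the two ubiquitous modalities.
    webcam_feats = rng.normal(size=(n, 6))  # e.g. head pose, eye/mouth openness
    mouse_feats = rng.normal(size=(n, 4))   # e.g. cursor speed, idle time, click rate
    labels = rng.integers(0, 2, size=n)     # attention level: 0 = distracted, 1 = attentive

    # Feature-level ("early") fusion: concatenate the per-modality vectors.
    fused = np.hstack([webcam_feats, mouse_feats])

    # Train and score each input variant; on real recordings, the fused score
    # exceeding the single-modality scores would reflect the abstract's claim
    # of improved multimodal performance.
    for name, X in [("webcam only", webcam_feats),
                    ("mouse only", mouse_feats),
                    ("fused", fused)]:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name}: mean CV accuracy {acc:.2f}")

Because the data above is randomly generated, the printed accuracies hover around chance; the snippet only demonstrates the shape of an early-fusion pipeline.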


Published in

SAC '16: Proceedings of the 31st Annual ACM Symposium on Applied Computing
April 2016, 2360 pages
ISBN: 9781450337397
DOI: 10.1145/2851613

      Copyright © 2016 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article

      Acceptance Rates

SAC '16 Paper Acceptance Rate: 252 of 1,047 submissions, 24%
Overall Acceptance Rate: 1,650 of 6,669 submissions, 25%
