DOI: 10.1145/2638728.2641691

Daily activity recognition combining gaze motion and visual features

Published: 13 September 2014

ABSTRACT

Recognition of user activities is a key issue for context-aware computing. We present a method for recognizing a user's daily activities that combines gaze motion features with image-based visual features. Gaze motion features dominate when inferring the user's egocentric context, whereas image-based visual features dominate when recognizing the surrounding environment and the target objects. Experimental results show that fusing these two types of features improves the performance of daily activity recognition.
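The abstract describes feature fusion at a high level without implementation details. As a rough sketch of the general idea only, the snippet below concatenates gaze motion features with image-based visual features and trains a single SVM on the fused vectors. The feature dimensions, the synthetic data, and the scikit-learn classifier settings are all assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of feature-level fusion for daily activity recognition.
# All dimensions and extractor choices below are hypothetical placeholders;
# the paper's actual pipeline may differ.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(gaze_feats: np.ndarray, visual_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-sample gaze motion and image-based visual features."""
    return np.hstack([gaze_feats, visual_feats])

# Toy data: 100 samples, 20-dim gaze motion features (e.g. fixation/saccade
# statistics) and 500-dim visual features (e.g. bag-of-visual-words histograms).
rng = np.random.default_rng(0)
gaze = rng.normal(size=(100, 20))
visual = rng.normal(size=(100, 500))
labels = rng.integers(0, 5, size=100)  # five hypothetical activity classes

X = fuse_features(gaze, visual)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, labels)
probs = clf.predict_proba(X)  # per-class probabilities for the fused features
```

Scaling before the SVM matters here because the two feature types typically live on very different numeric ranges; without it, the higher-magnitude modality would dominate the kernel distances.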


Published in
      UbiComp '14 Adjunct: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication
      September 2014
      1409 pages
ISBN: 9781450330473
DOI: 10.1145/2638728

      Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article

      Acceptance Rates

Overall Acceptance Rate: 764 of 2,912 submissions, 26%
