Research Article
DOI: 10.1145/2525194.2525206

Interacting with a self-portrait camera using motion-based hand gestures

Published: 24 September 2013

ABSTRACT

Taking self-portraits with a digital camera is a popular way to present oneself through photography. Traditional techniques for taking self-portraits, such as self-timers or face detection, provide only a modest degree of interaction between the user and the camera. In this paper, we present an interaction technique that makes novel use of an image-processing algorithm to recognize hand motion gestures, giving users a natural way to interact with a camera when taking self-portraits. Users can perform natural gestures to control essential camera functions and take self-portraits effectively. Three types of gesture (i.e., waving, eight-direction selection, and circling) were identified and used to develop a gesture user interface for controlling a Digital Single-Lens Reflex (DSLR) camera. Two experiments were conducted to evaluate the usability and performance of the gesture interface. The results confirmed that the usability of the gesture interface is superior to that of a self-timer, and the proposed technique achieved about 80% recognition accuracy for motion gestures.
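The three motion gestures named in the abstract lend themselves to simple trajectory heuristics. As an illustration only (the paper's actual algorithm is not given here), the sketch below assumes per-frame hand motion vectors (dx, dy), e.g. from an optical-flow tracker, and shows how waving could be detected as repeated horizontal reversals and how an eight-direction selection could be derived by quantizing the dominant motion angle; both function names are hypothetical.

```python
import math

def quantize_direction(dx, dy, bins=8):
    """Quantize a motion vector into one of `bins` compass directions
    (0 = rightward, increasing counterclockwise)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    step = 2 * math.pi / bins
    # Shift by half a bin so each direction is centered on its axis.
    return int((angle + step / 2) // step) % bins

def is_waving(dxs, min_reversals=3):
    """Heuristic wave detector: a wave shows up as several sign
    reversals in the horizontal motion component across frames."""
    signs = [1 if dx > 0 else -1 for dx in dxs if abs(dx) > 1e-6]
    reversals = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return reversals >= min_reversals
```

In a real pipeline, the (dx, dy) vectors would come from tracking the hand region between frames; circling could be detected in the same spirit by checking that successive quantized directions cycle monotonically through the bins.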


Published in

APCHI '13: Proceedings of the 11th Asia Pacific Conference on Computer Human Interaction
September 2013, 420 pages
ISBN: 9781450322539
DOI: 10.1145/2525194

Copyright © 2013 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States
