
Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices

Published: 18 April 2015

Abstract

This paper presents Cyclops, a single-piece wearable device that sees its user's whole-body postures from an ego-centric view: a fisheye lens worn at the center of the body captures only the user's limbs, which lets the device interpret body postures effectively. Unlike existing body-gesture input systems that rely on external cameras or motion sensors distributed across the body, Cyclops is worn as a single piece, such as a pendant or a badge. The main idea is to observe the limbs from a central location on the body; this ego-centric view turns posture recognition into a highly controllable computer vision problem. The paper demonstrates a proof-of-concept device and an algorithm that recognizes static and moving body gestures using motion history images (MHI) and a random decision forest (RDF). Four example applications are presented: an interactive body workout, a mobile racing game involving hands and feet, a full-body virtual reality system, and interaction with a tangible toy. In the body-workout experiment, on a database of 20 workout gestures collected from 20 participants, Cyclops achieved a recognition rate of 79% using MHI with simple template matching, which increased to 92% with the more advanced machine learning approach of RDF.
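
To make the two recognition stages concrete, below is a minimal, hypothetical Python sketch of a pipeline of this shape: a Bobick-and-Davis-style motion history image accumulated from frame differences, a template-matching baseline, and a random decision forest (scikit-learn's RandomForestClassifier as a stand-in). The parameter values (TAU, DIFF_THRESH, the tree count) and the flattened-pixel features are illustrative assumptions, not the paper's actual design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative parameters -- assumed values, not the paper's settings.
TAU = 15          # temporal window of the MHI, in frames
DIFF_THRESH = 30  # per-pixel intensity change counted as motion

def update_mhi(mhi, prev_gray, curr_gray):
    # One Bobick-and-Davis-style MHI step: moving pixels are stamped
    # with TAU; every other pixel decays by one per frame.
    moved = np.abs(curr_gray.astype(np.int16)
                   - prev_gray.astype(np.int16)) > DIFF_THRESH
    mhi = np.maximum(mhi - 1, 0)
    mhi[moved] = TAU
    return mhi

def mhi_from_clip(frames):
    # Accumulate an MHI over one gesture clip (grayscale frames),
    # normalized to [0, 1] so clips of any length are comparable.
    mhi = np.zeros_like(frames[0], dtype=np.int16)
    for prev, curr in zip(frames, frames[1:]):
        mhi = update_mhi(mhi, prev, curr)
    return mhi.astype(np.float32) / TAU

def match_template(mhi, templates):
    # Baseline recognizer: pick the stored gesture template whose
    # MHI correlates best with the query MHI.
    scores = [np.corrcoef(mhi.ravel(), t.ravel())[0, 1] for t in templates]
    return int(np.argmax(scores))

def train_rdf(mhis, labels):
    # Forest stage: train a random decision forest on flattened MHIs
    # (100 trees is an assumed count; features here are raw pixels).
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit([m.ravel() for m in mhis], labels)
    return clf

On the paper's 20-gesture workout set, these two stages correspond to the reported 79% (template matching) and 92% (RDF) recognition rates; the sketch only shows the overall shape of such a pipeline, not the paper's actual feature design.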

Supplementary Material

Supplemental video: suppl.mov (pn1809-file3.mp4)
MP4 File: p3001-chen.mp4




    Published In

CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
April 2015, 4290 pages
ISBN: 9781450331456
DOI: 10.1145/2702123


    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. ego-centric view
    2. full-body gesture input
    3. posture recognition
    4. single-point wearable devices

    Qualifiers

    • Research-article

    Conference

CHI '15: CHI Conference on Human Factors in Computing Systems
April 18-23, 2015
Seoul, Republic of Korea

    Acceptance Rates

CHI '15 Paper Acceptance Rate: 486 of 2,120 submissions, 23%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
