Research Article
DOI: 10.1145/3025453.3025478

Hybrid HFR Depth: Fusing Commodity Depth and Color Cameras to Achieve High Frame Rate, Low Latency Depth Camera Interactions

Published: 02 May 2017

Abstract

The low frame rate and high latency of consumer depth cameras limit their use in interactive applications. We propose combining the Kinect depth camera with an ordinary color camera to synthesize a high frame rate, low latency depth image. We exploit common CMOS camera region of interest (ROI) functionality to obtain a high frame rate image stream over a small ROI. Motion in the ROI is computed by a fast optical flow implementation. The resulting flow field is used to extrapolate Kinect depth images to achieve high frame rate, low latency depth, and optionally to predict depth to further reduce latency. Our "Hybrid HFR Depth" prototype generates useful depth images at up to 500 Hz with as little as 20 ms latency. We demonstrate Hybrid HFR Depth in tracking fast-moving objects, handwriting in the air, and projecting onto moving hands. Based on commonly available cameras and image processing implementations, Hybrid HFR Depth may be useful to HCI practitioners seeking to create fast, fluid depth camera-based interactions.
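The core idea in the abstract is to warp the most recent (slow) depth frame along the (fast) optical flow computed from the color camera's ROI. A minimal sketch of that extrapolation step, assuming a dense per-pixel flow field and nearest-pixel forward splatting; the function name, the zero-means-invalid convention, and the lack of z-buffering are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extrapolate_depth(depth, flow, dt_ratio=1.0):
    """Warp a depth image forward along a 2D flow field.

    depth:    (H, W) array of depth values; 0 marks invalid pixels.
    flow:     (H, W, 2) per-pixel (dx, dy) motion between color frames.
    dt_ratio: fraction of one flow step to extrapolate (>1.0 predicts
              ahead of the newest color frame, further reducing latency).
    """
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.nonzero(depth)                      # valid source pixels
    dx = np.rint(flow[ys, xs, 0] * dt_ratio).astype(int)
    dy = np.rint(flow[ys, xs, 1] * dt_ratio).astype(int)
    xt, yt = xs + dx, ys + dy
    ok = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    # Forward-splat depth to target pixels; later writes simply overwrite
    # earlier ones (no occlusion handling in this sketch).
    out[yt[ok], xt[ok]] = depth[ys[ok], xs[ok]]
    return out
```

In the paper's setting the flow would come from a fast dense optical flow routine run on the high frame rate color ROI; here any (H, W, 2) array works, and `dt_ratio` sketches the optional prediction step mentioned in the abstract.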

Supplementary Material

suppl.mov (pn1127-file3.mp4)
Supplemental video
suppl.mov (pn1127p.mp4)
Supplemental video




    Published In

    CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
    May 2017
    7138 pages
    ISBN:9781450346559
    DOI:10.1145/3025453
    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Badges

    • Honorable Mention

    Author Tags

    1. configurable
    2. depth camera
    3. frame rate
    4. kinect
    5. latency


    Funding Sources

    • Sloan Foundation

    Conference

    CHI '17

    Acceptance Rates

    CHI '17 Paper Acceptance Rate 600 of 2,400 submissions, 25%;
    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



Cited By

    • (2022) Reducing the Latency of Touch Tracking on Ad-hoc Surfaces. Proceedings of the ACM on Human-Computer Interaction 6 (ISS), 489-499. DOI: 10.1145/3567730
    • (2022) Adaptive Subsampling for ROI-Based Visual Tracking: Algorithms and FPGA Implementation. IEEE Access 10, 90507-90522. DOI: 10.1109/ACCESS.2022.3200755
    • (2020) Learned Dynamic Guidance for Depth Image Reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (10), 2437-2452. DOI: 10.1109/TPAMI.2019.2961672
    • (2018) The Need 4 Speed in Real-Time Dense Visual Tracking. ACM Transactions on Graphics 37 (6), 1-14. DOI: 10.1145/3272127.3275062
    • (2018) The Volca Project. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3170427.3177767
    • (2017) An Instant See-Through Vision System Using a Wide Field-of-View Camera and a 3D-Lidar. 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), 344-347. DOI: 10.1109/ISMAR-Adjunct.2017.99
