DOI: 10.1145/3132787.3139201

Automated enabling of head mounted display using gaze-depth estimation


ABSTRACT

Recently, global companies have released optical see-through head-mounted displays (OST-HMDs) for augmented reality. The defining feature of these HMDs is that the wearer sees virtual objects overlaid on the real environment. However, when the user wants to focus on a real object rather than on the virtual content, the overlay becomes a hindrance. In this paper, we propose a method that turns the screen of an augmented reality HMD on or off according to the user's gaze. The proposed method uses an eye tracker attached to a mobile HMD to collect gaze data at known fixation distances. These data are fed into a neural network to train a gaze-depth model; once training is complete, gaze data are input in real time to predict the fixation distance. Through a series of experiments, we examine the capabilities and limits of the machine-learning approach and suggest directions for improvement.
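To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation: it trains a small multi-layer perceptron (scikit-learn's MLPRegressor) to map binocular pupil coordinates to fixation depth, then toggles the display by thresholding the prediction. The feature layout, the sample values, and the 1 m threshold are all illustrative assumptions; the abstract does not specify the network architecture or features actually used.

```python
# A minimal sketch (not the authors' code) of the pipeline in the abstract:
# learn a mapping from binocular gaze features to fixation depth with an MLP,
# then toggle the HMD screen by thresholding the predicted depth.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed training data: each row holds normalized left/right pupil positions
# from the eye tracker (x_l, y_l, x_r, y_r); targets are fixation depths in
# meters, recorded while the user looks at markers at known distances.
# A real calibration would collect many samples per distance.
X_train = np.array([
    [0.42, 0.51, 0.58, 0.50],   # strong vergence -> near fixation
    [0.45, 0.50, 0.55, 0.50],
    [0.48, 0.50, 0.52, 0.50],   # weak vergence -> far fixation
])
y_train = np.array([0.5, 1.0, 2.0])  # gaze depth in meters

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

# Assumed cut-off between focusing on virtual content (near the display's
# focal distance) and focusing on the real scene beyond it.
DEPTH_THRESHOLD_M = 1.0

def hmd_screen_should_be_on(gaze_sample):
    """Return True if the predicted gaze depth falls within the virtual
    content's distance, i.e. the user is focusing on the display."""
    depth = model.predict(np.asarray(gaze_sample).reshape(1, -1))[0]
    return depth < DEPTH_THRESHOLD_M

# In the real-time phase, each new eye-tracker sample would be fed through
# the trained model to enable or disable the display.
print(hmd_screen_should_be_on([0.43, 0.51, 0.57, 0.50]))
```

In practice the toggle would also need temporal smoothing (e.g. requiring several consecutive frames past the threshold) so that blinks and saccades do not flicker the display; the abstract does not say how the authors handle this.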


Published in

SA '17: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications
November 2017
116 pages
ISBN: 9781450354103
DOI: 10.1145/3132787

      Copyright © 2017 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 27 November 2017

      Qualifiers

      • abstract

      Acceptance Rates

Overall Acceptance Rate: 178 of 869 submissions, 20%
