ABSTRACT
Recently, global companies have released OST-HMDs (Optical See-Through Head-Mounted Displays) for augmented reality. The main feature of these HMDs is that the wearer can view virtual objects overlaid on the real environment. However, when the user wants to ignore the virtual content and concentrate on a real object, this always-visible overlay becomes an inconvenience. In this paper, we propose a method that turns the screen of an augmented reality HMD on or off according to the user's gaze. The proposed method uses an eye tracker attached to a mobile HMD to collect gaze data at known viewing distances. These data are fed into a neural network to train a learning model; after training is complete, gaze data are input in real time to obtain the predicted gaze depth. Through a series of experiments, we examine the possibilities and limitations of the machine learning approach and suggest directions for improvement.
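To make the described pipeline concrete, the following is a minimal sketch of the idea, not the authors' actual implementation: an MLP regressor (here scikit-learn's MLPRegressor) is trained on eye-tracker features labelled with known fixation distances, and the real-time depth prediction is thresholded to decide whether the HMD screen should be on or off. The feature layout, the sample values, the threshold, and the on/off direction are all assumptions for illustration.

```python
# Sketch of gaze-depth estimation driving an HMD screen toggle.
# Assumptions: 4 binocular gaze features per sample, distances in metres,
# and a simple near/far threshold for switching the display.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Calibration data: rows are eye-tracker features (e.g. left/right normalized
# pupil x, y); y holds the known fixation distance of each calibration target.
X_train = np.array([
    [0.42, 0.51, 0.58, 0.50],   # strong vergence -> near target
    [0.45, 0.50, 0.55, 0.50],
    [0.48, 0.50, 0.52, 0.50],
    [0.49, 0.50, 0.51, 0.50],   # weak vergence -> far target
])
y_train = np.array([0.5, 1.0, 2.0, 3.0])

# Multi-layer perceptron mapping gaze features to predicted gaze depth.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

NEAR_THRESHOLD_M = 1.0  # assumed cut-off between real-object and virtual focus

def update_display(gaze_sample):
    """Predict gaze depth for one real-time sample and decide the screen state."""
    depth = model.predict(np.asarray(gaze_sample).reshape(1, -1))[0]
    screen_on = depth >= NEAR_THRESHOLD_M  # assumed: near focus -> screen off
    return depth, screen_on

print(update_display([0.43, 0.51, 0.57, 0.50]))
```

In a deployed system the same predict-then-threshold step would run on each eye-tracker sample, so the virtual content disappears while the user fixates on nearby real objects and reappears when the gaze returns to the virtual content's depth.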