Abstract
We propose a 3D gaze-tracking method that combines 3D eye-gaze and facial-gaze vectors estimated from the Kinect v2 high-definition face model. Because the 3D positions of the facial and ocular features are measured directly, gaze positions can be calculated more accurately than with previous methods. The two gaze vectors are combined as a weighted sum whose weights reflect the image resolution of the face and eye regions, with more weight allocated to the facial-gaze vector. Facial orientation therefore largely determines the gaze position, and the eye-gaze vector contributes a fine adjustment. The 3D facial-gaze vector is defined first; the 3D rotational center of the eyeball is then estimated, which in turn defines the 3D eye-gaze vector. Finally, the gaze position is calculated as the intersection of the combined 3D gaze vector with the physical display plane. Experimental results show an average root-mean-square gaze-estimation error of approximately 23 pixels from the target position on a \(1920\times 1080\) display.
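As a rough illustration of the combination and intersection steps described above, the following Python sketch forms the weighted sum of the two unit gaze vectors and intersects the resulting ray with the display plane. It is a minimal sketch under stated assumptions, not the paper's implementation: the weight value, the coordinate values, and the names (combined_gaze_vector, gaze_point_on_display, w_face) are illustrative and do not come from the paper.

```python
import numpy as np

def combined_gaze_vector(face_gaze, eye_gaze, w_face=0.8):
    """Weighted sum of the facial- and eye-gaze unit vectors.

    More weight goes to the facial-gaze vector, so head orientation dominates
    and the eye-gaze vector only fine-tunes the result. The value 0.8 is a
    placeholder, not the weight used in the paper.
    """
    g = w_face * np.asarray(face_gaze) + (1.0 - w_face) * np.asarray(eye_gaze)
    return g / np.linalg.norm(g)

def gaze_point_on_display(origin, direction, plane_point, plane_normal):
    """Intersect the gaze ray origin + t * direction with the display plane.

    origin is the 3D rotational center of the eyeball (camera coordinates);
    plane_point / plane_normal describe the physical display plane in the
    same coordinate system.
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # gaze ray is (nearly) parallel to the display plane
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # display plane lies behind the gaze origin
    return origin + t * direction

# Toy example: user about 600 mm in front of a display at z = 0,
# looking back toward the display (negative z in camera coordinates).
face_gaze = np.array([0.05, -0.02, -1.0])
eye_gaze = np.array([0.10, -0.01, -1.0])
gaze = combined_gaze_vector(face_gaze / np.linalg.norm(face_gaze),
                            eye_gaze / np.linalg.norm(eye_gaze))
hit = gaze_point_on_display(origin=np.array([0.0, 0.0, 600.0]),
                            direction=gaze,
                            plane_point=np.zeros(3),
                            plane_normal=np.array([0.0, 0.0, 1.0]))
print(hit)  # 3D point on the display plane
```

Mapping the resulting 3D intersection point to screen pixels then only requires the known physical size, position, and resolution of the display.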
Acknowledgments
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2016-H8501-16-1014 and IITP-2016-R0992-16-1014) supervised by the IITP (Institute for Information & communications Technology Promotion).
Cite this article
Kim, B.C., Ko, D., Jang, U. et al. 3D Gaze tracking by combining eye- and facial-gaze vectors. J Supercomput 73, 3038–3052 (2017). https://doi.org/10.1007/s11227-016-1817-5