Abstract
Non-invasive gaze estimation from eye images captured by camera alone is a challenging problem due to variation in eye shape, eye structure, and image quality. Recently, convolutional neural networks (CNNs) have been applied to regress eye images directly to gaze directions and have achieved good performance. However, generic approaches are susceptible to bias and variance strongly tied to individual differences. In this paper, we study the person-specific bias that arises when applying generic methods to a new person, and we introduce a novel appearance-based deep neural network that integrates meta-learning to reduce this bias. Given only a few person-specific calibration images collected in a normal calibration process, our model adapts quickly to the test person and predicts more accurate gaze directions. Experiments on the public MPIIGaze and Eyediap datasets show that our approach achieves accuracy competitive with current state-of-the-art methods and is able to alleviate the person-specific bias problem.
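The few-shot adaptation the abstract describes can be illustrated with a minimal sketch: starting from a generic regressor, take a few gradient steps on the new person's calibration samples (a MAML-style inner loop). This is not the paper's actual network; the linear model, feature dimensions, and function name below are hypothetical stand-ins chosen for brevity.

```python
import numpy as np

def adapt_to_person(W, b, calib_x, calib_y, steps=5, lr=0.1):
    """Fine-tune a generic linear gaze regressor on a handful of
    person-specific calibration samples (inner-loop adaptation).
    W, b are copied so the generic model stays untouched."""
    W, b = W.copy(), b.copy()
    for _ in range(steps):
        pred = calib_x @ W + b           # predicted gaze (yaw, pitch)
        grad = pred - calib_y            # gradient of 0.5 * MSE loss
        W -= lr * calib_x.T @ grad / len(calib_x)
        b -= lr * grad.mean(axis=0)
    return W, b

# Toy demo: a "person" whose gaze carries a constant angular offset,
# mimicking the person-specific bias of a generic model.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(4, 2))
x = rng.normal(size=(9, 4))                 # 9 calibration features
y = x @ W_true + np.array([0.1, -0.05])     # person-specific offset

W0, b0 = W_true, np.zeros(2)                # generic model: no offset
err_before = np.abs(x @ W0 + b0 - y).mean()
W1, b1 = adapt_to_person(W0, b0, x, y)
err_after = np.abs(x @ W1 + b1 - y).mean()
```

After a few gradient steps on only nine calibration samples, the adapted model's error on that person drops below the generic model's, which is the behavior the meta-learned initialization is trained to make possible.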
Acknowledgment
This work was supported by the Multidisciplinary Development Project of Shanghai Jiao Tong University under Grant YG2017MS33, the Science and Technology Commission of Shanghai Municipality (STCSM) under Grant 12DZ1200102, and the NSFC under Grants 61471234 and 61771303.
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zheng, C., Zhou, J., Sun, J., Zhao, L. (2020). Adaptive Person-Specific Appearance-Based Gaze Estimation. In: Zhai, G., Zhou, J., Yang, H., An, P., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2019. Communications in Computer and Information Science, vol 1181. Springer, Singapore. https://doi.org/10.1007/978-981-15-3341-9_11
DOI: https://doi.org/10.1007/978-981-15-3341-9_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-3340-2
Online ISBN: 978-981-15-3341-9
eBook Packages: Computer Science (R0)