
Adaptive Person-Specific Appearance-Based Gaze Estimation

  • Conference paper in: Digital TV and Wireless Multimedia Communication (IFTC 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1181)

Abstract

Non-invasive gaze estimation from eye images alone is a challenging problem due to variation in eye shapes, eye structures, and image quality. Recently, convolutional neural networks (CNNs) have been applied to regress gaze direction directly from eye images and have achieved good performance. However, such generic approaches are susceptible to bias and variance that are strongly tied to the individual. In this paper, we study the person-specific bias that arises when a generic method is applied to a new person, and we introduce a novel appearance-based deep neural network that integrates meta-learning to reduce this bias. Given only a few person-specific calibration images collected in a standard calibration process, our model adapts quickly to the test person and predicts more accurate gaze directions. Experiments on the public MPIIGaze and Eyediap datasets show that our approach achieves accuracy competitive with current state-of-the-art methods and alleviates the person-specific bias problem.
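The adaptation scheme described in the abstract can be illustrated with a minimal sketch. The paper's actual network and meta-learning procedure are not reproduced here; the snippet below uses a toy linear "gaze regressor" and MAML-style inner-loop gradient steps on a handful of calibration samples, purely to show how a few person-specific examples can correct a generic model's per-person bias. All names (`predict`, `adapt`, the feature dimension, the synthetic person offset) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(W, x):
    # Linear stand-in for the gaze network: image features -> (yaw, pitch).
    return x @ W

def adapt(W, x_cal, y_cal, lr=0.05, steps=20):
    # Inner-loop adaptation: a few gradient steps on the test person's
    # calibration samples, starting from the generic weights W.
    W = W.copy()
    for _ in range(steps):
        grad = 2.0 * x_cal.T @ (predict(W, x_cal) - y_cal) / len(x_cal)
        W -= lr * grad
    return W

# Synthetic "person": true mapping = generic mapping + person-specific offset.
W_generic = rng.normal(size=(8, 2))
W_person = W_generic + 0.5 * rng.normal(size=(8, 2))

x_cal = rng.normal(size=(9, 8))   # features of 9 calibration images
y_cal = x_cal @ W_person          # ground-truth gaze for calibration set
x_test = rng.normal(size=(100, 8))
y_test = x_test @ W_person

err_before = np.abs(predict(W_generic, x_test) - y_test).mean()
W_adapted = adapt(W_generic, x_cal, y_cal)
err_after = np.abs(predict(W_adapted, x_test) - y_test).mean()
assert err_after < err_before  # adaptation reduces person-specific error
```

In the paper's setting the same idea operates on a deep network whose meta-trained initialization is chosen so that a few such inner-loop steps suffice; the linear model here only makes the bias-correction effect easy to see.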



Acknowledgment

This work was supported by the Multidisciplinary Development Project of Shanghai Jiao Tong University under Grant YG2017MS33, the Science and Technology Commission of Shanghai Municipality (STCSM) under Grant 12DZ1200102, and the NSFC under Grants 61471234 and 61771303.

Author information

Correspondence to Jun Zhou.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zheng, C., Zhou, J., Sun, J., Zhao, L. (2020). Adaptive Person-Specific Appearance-Based Gaze Estimation. In: Zhai, G., Zhou, J., Yang, H., An, P., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2019. Communications in Computer and Information Science, vol 1181. Springer, Singapore. https://doi.org/10.1007/978-981-15-3341-9_11

Download citation

  • DOI: https://doi.org/10.1007/978-981-15-3341-9_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3340-2

  • Online ISBN: 978-981-15-3341-9

  • eBook Packages: Computer Science, Computer Science (R0)
