Eye Centre Localisation with Convolutional Neural Networks in High- and Low-Resolution Images

  • Conference paper
Computational Science and Its Applications – ICCSA 2022 (ICCSA 2022)

Abstract

Eye centre localisation is critical to eye tracking systems of various forms, with applications across a variety of disciplines. Active eye tracking approaches can achieve high accuracy by using active illumination to enhance the contrast between the pupil and its neighbouring area. While this approach is commonly adopted by commercial eye trackers, the dependency on IR illumination can drastically increase system complexity and cost, limit the tracking range, and reduce system usability. This paper investigates a passive eye centre localisation approach, based on a single camera, using convolutional neural networks. A number of model architectures were evaluated, including Inception-v3, NASNet, MobileNetV2, and EfficientNetV2. An accuracy of 99.34% at a normalised error threshold of 0.05 was achieved on the BioID dataset, outperforming four other state-of-the-art methods. A means of further improving this performance on high-resolution data was proposed and validated on a high-resolution dataset containing 12,381 one-megapixel images. When assessed in a typical eye tracking scenario, the approach achieved an average eye tracking error of 0.87%, comparable to that of a much more expensive commercial eye tracker.
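
The abstract does not define the normalised error measure, but eye centre localisation results on the BioID dataset are conventionally reported using the worst-case relative error of Jesorsky et al., in which the larger of the two eye-centre prediction distances is divided by the ground-truth inter-ocular distance; an error of 0.05 then corresponds roughly to the pupil radius. The short Python sketch below illustrates that metric under this assumption; the function names and the coordinates in the example are hypothetical and not taken from the paper.

    import numpy as np

    def normalised_error(pred_left, pred_right, gt_left, gt_right):
        # Worst-case eye-centre error normalised by the ground-truth
        # inter-ocular distance (the measure commonly used for BioID);
        # e <= 0.05 roughly means the prediction lies inside the pupil.
        pred_left, pred_right = np.asarray(pred_left, float), np.asarray(pred_right, float)
        gt_left, gt_right = np.asarray(gt_left, float), np.asarray(gt_right, float)
        d_left = np.linalg.norm(pred_left - gt_left)
        d_right = np.linalg.norm(pred_right - gt_right)
        return max(d_left, d_right) / np.linalg.norm(gt_left - gt_right)

    def accuracy_at(errors, threshold=0.05):
        # Fraction of images whose normalised error is within the threshold,
        # i.e. the figure reported as "accuracy with a 0.05 normalised error".
        return float(np.mean(np.asarray(errors, float) <= threshold))

    # Hypothetical example: predictions a few pixels off on a face whose
    # true inter-ocular distance is 70 pixels.
    e = normalised_error(pred_left=(102, 151), pred_right=(172, 150),
                         gt_left=(100, 150), gt_right=(170, 150))
    print(f"normalised error = {e:.3f}; correct at the 0.05 threshold: {e <= 0.05}")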


Author information

Corresponding author

Correspondence to Wenhao Zhang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, W., Smith, M.L. (2022). Eye Centre Localisation with Convolutional Neural Networks in High- and Low-Resolution Images. In: Gervasi, O., Murgante, B., Hendrix, E.M.T., Taniar, D., Apduhan, B.O. (eds) Computational Science and Its Applications – ICCSA 2022. ICCSA 2022. Lecture Notes in Computer Science, vol 13375. Springer, Cham. https://doi.org/10.1007/978-3-031-10522-7_26


  • DOI: https://doi.org/10.1007/978-3-031-10522-7_26

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10521-0

  • Online ISBN: 978-3-031-10522-7

  • eBook Packages: Computer Science, Computer Science (R0)
