
An integrated head pose and eye gaze tracking approach to non-intrusive visual attention measurement for wide FOV simulators

  • SI: Manufacturing and Construction
  • Published in: Virtual Reality

Abstract

Eye gaze tracking is very useful for quantitatively measuring visual attention in virtual environments. However, most eye trackers have a limited tracking range, e.g., ±35° in the horizontal direction. This paper proposes a method that combines head pose tracking with eye gaze tracking to achieve a large tracking range in virtual driving simulation environments. Multiple parallel multilayer perceptrons were used to learn the mapping from head images to head poses, with each head image represented by coefficients extracted through principal component analysis (PCA). Eye gaze tracking provides precise results in the front view, while head pose tracking is better suited to tracking areas of interest than points of interest in the side view.
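The PCA-plus-MLP pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the image size, number of principal components, network shape, and the synthetic pose-dependent images are all illustrative assumptions. The sketch flattens "head images", projects them onto the top principal components, and trains a small one-hidden-layer perceptron by gradient descent to regress two pose angles (yaw, pitch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for head images: 200 samples of 16x16 grayscale
# pixels whose intensities depend (linearly, plus noise) on two pose
# angles in degrees. Real head images would replace this block.
n, h, w = 200, 16, 16
poses = rng.uniform(-40, 40, size=(n, 2))           # (yaw, pitch) in degrees
signal_basis = rng.normal(size=(2, h * w))
images = poses @ signal_basis + 0.1 * rng.normal(size=(n, h * w))

# --- PCA: represent each image by its top-k component coefficients ---
k = 10
mean = images.mean(axis=0)
U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
coeffs = (images - mean) @ Vt[:k].T                 # n x k feature vectors

# --- One-hidden-layer MLP mapping PCA coefficients to pose angles ---
W1 = rng.normal(scale=0.1, size=(k, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2)); b2 = np.zeros(2)
x = coeffs / coeffs.std(axis=0)                     # normalized features
y = poses / 40.0                                    # normalized targets
lr = 0.05
for _ in range(2000):
    hidden = np.tanh(x @ W1 + b1)
    pred = hidden @ W2 + b2
    err = pred - y                                  # gradient of MSE/2 loss
    gW2 = hidden.T @ err / n; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - hidden ** 2)           # backprop through tanh
    gW1 = x.T @ dh / n; gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred_deg = (np.tanh(x @ W1 + b1) @ W2 + b2) * 40.0
rmse = float(np.sqrt(np.mean((pred_deg - poses) ** 2)))
print(f"pose RMSE on training data: {rmse:.2f} deg")
```

The paper's "multiple parallel" perceptrons presumably partition the pose range among several such networks; the sketch above shows only a single network for clarity.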



Acknowledgments

This research was supported by the National Science Foundation (NSF) through a research grant awarded to the corresponding author (Grant # 0954579).

Author information


Corresponding author

Correspondence to Yingzi Lin.


About this article

Cite this article

Cai, H., Lin, Y. An integrated head pose and eye gaze tracking approach to non-intrusive visual attention measurement for wide FOV simulators. Virtual Reality 16, 25–32 (2012). https://doi.org/10.1007/s10055-010-0171-9
