
The Role of Focus in Advanced Visual Interfaces

  • Technical Contribution
  • KI - Künstliche Intelligenz

Abstract

Developing more natural and intelligent interaction methods for head-mounted displays (HMDs) has been an important goal in augmented reality for many years. Recently, eye tracking interfaces and wearable displays have become compact enough to be worn simultaneously and for extended periods of time. In this paper, we describe the combination of monocular HMDs with an eye tracking interface and show how they can be used to automatically reduce interaction requirements for displays with both single and multiple focal planes. We then present the results of preliminary and primary experiments that test the accuracy of eye tracking for a number of different displays, such as Google Glass and Brother's AiRScouter. Results show that our focal plane classification algorithm achieves over 98% accuracy in classifying the correct distance of virtual objects in our multi-focal-plane display prototype, and over 90% accuracy in classifying physical and virtual objects in commercial monocular displays. Additionally, we describe a methodology for integrating our system into augmented reality applications and attentive interfaces.
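The focal plane classification described above is, at its core, a supervised learning problem over eye tracking features: the vergence of the two eyes changes with the distance of the attended object, so a classifier trained on calibration samples can label which focal plane (or whether physical or virtual content) the user is fixating. The following is a minimal, self-contained sketch of that idea using an SVM; the synthetic vergence features, plane distances, and scikit-learn classifier are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): classifying which
# focal plane a user is fixating from binocular eye tracking features
# with an SVM. Feature layout, vergence angles, and plane distances
# are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synth_samples(n, vergence_deg, label):
    """Stand-in for recorded calibration data: horizontal pupil
    offsets (in pixels) for the left and right eye, plus tracker
    noise. Vergence grows as the fixated object gets closer."""
    offset = np.tan(np.radians(vergence_deg)) * 100.0  # arbitrary px scale
    left = offset + rng.normal(0.0, 1.5, n)
    right = -offset + rng.normal(0.0, 1.5, n)
    return np.column_stack([left, right]), np.full(n, label)

# Two hypothetical focal planes: "near" (~0.5 m) and "far" (~3 m),
# which produce clearly different vergence angles.
X_near, y_near = synth_samples(200, vergence_deg=3.6, label=0)
X_far, y_far = synth_samples(200, vergence_deg=0.6, label=1)
X = np.vstack([X_near, X_far])
y = np.concatenate([y_near, y_far])

# Train an RBF-kernel SVM on calibration samples and report held-out
# accuracy, mirroring the per-plane classification task in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

In practice, the features would come from per-user calibration with the eye tracker rather than a synthetic generator, but the train-then-classify structure is the same.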

Acknowledgments

We would like to thank all 38 experiment volunteers for their time, and DFKI and Osaka University for their continued collaboration. This research was funded in part by Grants-in-Aid for Scientific Research (B), #24300048 and #A15J030230, from the Japan Society for the Promotion of Science (JSPS), Japan, and by the Kognit project, which is supported by the German Federal Ministry of Education and Research (BMBF).

Author information

Correspondence to Jason Orlosky.

Cite this article

Orlosky, J., Toyama, T., Sonntag, D. et al. The Role of Focus in Advanced Visual Interfaces. Künstl Intell 30, 301–310 (2016). https://doi.org/10.1007/s13218-015-0411-y
