FIGI: floating interface for gesture-based interaction

  • Original Research
  • Journal of Ambient Intelligence and Humanized Computing

Abstract

Mixed reality is a promising technology for a wide range of application fields, including computer-based training, systems maintenance and medical imaging, to name a few. The Floating Interface for Gesture-based Interaction (FIGI) architecture presented in this study combines a context-adaptive head-up interface, projected onto the central region of the user’s visual field, with gesture-based interaction, enabling easy, robust and powerful manipulation of virtual contents mapped onto the real environment surrounding the user. The interaction paradigm combines one-hand, two-hand and time-based gestures both to select tools/functions among those available and to operate them. Even conventional keyboard-based functions such as typing can be performed without a physical interface, by means of a floating keyboard layout. The paper describes the overall system architecture and its application to the interactive visualization of three-dimensional models of human anatomy, for training or educational purposes. We also report the results of an evaluation study assessing the usability, effectiveness and possible limitations of the proposed approach.
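To make the interaction paradigm concrete, the sketch below illustrates how one-hand, two-hand and time-based (dwell) gestures could be combined in a single dispatch loop. It is purely illustrative and not the authors' implementation: it assumes a hand tracker that reports a per-frame pose label and the floating-interface element under the fingertip, and every name in it (HandPose, DwellSelector, DWELL_SECONDS, the "scale-model" tool) is hypothetical.

```python
import time
from dataclasses import dataclass
from typing import Optional

DWELL_SECONDS = 0.8  # assumed dwell time for the time-based "select" gesture

@dataclass
class HandPose:
    """Per-frame output of a hypothetical hand tracker."""
    label: str             # e.g. "point", "pinch", "open"
    target: Optional[str]  # floating-interface element under the fingertip, if any

class DwellSelector:
    """Time-based gesture: select a floating tool by pointing at it
    continuously for DWELL_SECONDS (a common hands-free selection idiom)."""

    def __init__(self) -> None:
        self._target: Optional[str] = None
        self._since: float = 0.0

    def update(self, pose: HandPose, now: float) -> Optional[str]:
        if pose.label != "point" or pose.target is None:
            self._target = None        # pointing stopped: reset the timer
            return None
        if pose.target != self._target:
            self._target, self._since = pose.target, now
            return None
        if now - self._since >= DWELL_SECONDS:
            self._since = now          # re-arm so the selection fires only once
            return self._target
        return None

def dispatch(left: Optional[HandPose], right: Optional[HandPose],
             selector: DwellSelector) -> Optional[str]:
    """Combine the three gesture classes: a symmetric two-hand pinch maps to
    a scaling tool, while a one-hand point feeds the time-based selector."""
    if left and right and left.label == right.label == "pinch":
        return "scale-model"                        # two-hand gesture
    if right is not None:
        return selector.update(right, time.time())  # one-hand + time-based
    return None
```

In such a scheme the dwell timer stands in for the time-based gestures described above, while the chained pinch test is one plausible way to recognize a symmetric two-hand gesture.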

Author information

Correspondence to S. Ricciardi.

About this article

Cite this article

De Marsico, M., Levialdi, S., Nappi, M. et al. FIGI: floating interface for gesture-based interaction. J Ambient Intell Human Comput 5, 511–524 (2014). https://doi.org/10.1007/s12652-012-0160-9
