
CUDA-based real-time hand gesture interaction and visualization for CT volume dataset using leap motion

Original Article · Published in The Visual Computer

Abstract

Touchless interaction has received considerable attention in recent years for its benefit of removing the barriers of physical contact. Several approaches are available for achieving mid-air interaction; however, most of them cause discomfort when the interaction method is not direct manipulation. In this paper, gestures based on unimanual and bimanual interactions with different tools are designed for exploring CT volume datasets, performing tasks similar to those in realistic applications. A focus + context approach based on GPU volume ray casting with a trapezoid-shaped transfer function is used for visualization, and a level-of-detail technique is adopted to accelerate interactive rendering. Experiments comparing the effectiveness and intuitiveness of our interaction approach with others show that ours performs better, with shorter completion times. Moreover, the bimanual interaction offers further advantages, saving time when performing continuous exploration tasks.
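The trapezoid-shaped transfer function mentioned in the abstract can be illustrated with a minimal host-side sketch. This is an assumption-laden reconstruction, not the paper's implementation: the function name and the interval endpoints are illustrative, and in the actual system the mapping would typically run per-sample inside the CUDA ray-casting kernel (or be baked into a lookup texture).

```cpp
// Trapezoid-shaped opacity transfer function (illustrative sketch):
// opacity ramps up linearly over [a, b], stays at full strength over
// [b, c], and ramps down over [c, d]. Scalar values outside [a, d]
// are mapped to fully transparent, which lets a focus + context
// renderer emphasize one tissue-density band of a CT volume.
double trapezoidOpacity(double v, double a, double b, double c, double d) {
    if (v <= a || v >= d) return 0.0;      // outside the band: transparent
    if (v <  b) return (v - a) / (b - a);  // rising edge
    if (v <= c) return 1.0;                // plateau: focus region
    return (d - v) / (d - c);              // falling edge
}
```

In a GPU ray caster, this per-sample opacity would then feed the usual front-to-back compositing along each ray; adjusting (a, b, c, d) interactively shifts which density range appears as the focus.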


[Figures 1–14 appear in the full article.]



Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Nos. 61170170, 61271366 and 61472042), the Beijing Natural Science Foundation (No. 4152028) and the Fundamental Research Funds for the Central Universities (Nos. 2013YB70 and 2015KJJCB25).

Author information

Correspondence to Yanlin Luo.


About this article


Cite this article

Shen, J., Luo, Y., Wu, Z. et al. CUDA-based real-time hand gesture interaction and visualization for CT volume dataset using leap motion. Vis Comput 32, 359–370 (2016). https://doi.org/10.1007/s00371-016-1209-0

