VRCAT: VR collision alarming technique for user safety

  • Original article
  • Published in The Visual Computer

Abstract

The rapid advancement of virtual reality (VR) head-mounted displays (HMDs) has made it possible to experience immersive VR at home. However, such immersion inevitably disconnects users from reality and puts their safety at risk. We propose a VR Collision Alarming Technique (VRCAT) to address this problem. The fundamental idea is to use an RGB camera to identify physical obstacles around VR users and warn them of potential collisions via the HMD. We built VRCAT on a smartphone platform that is readily available to the general public, reducing the learning cost and improving the accessibility of our system. To validate whether VRCAT improves user safety and is easy to use, we ran an evaluation test and an application test. The evaluation test shows that novice users can install VRCAT in about a minute, and that VRCAT estimates the 3D positions of the user and obstacles every 0.09 s with an error of 5–7 cm. The application test, conducted in real-world scenarios, shows that VRCAT improved user safety without compromising the user's attention or performance on VR tasks.
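To make the alarming idea concrete, the core decision the abstract describes (given estimated 3D positions of the user and nearby obstacles, decide which obstacles warrant a warning) can be sketched as a simple proximity check. This is an illustrative sketch only: the function names and the 0.5 m threshold are hypothetical, and the paper's actual pipeline (RGB-camera obstacle detection and 3D position estimation) is not reproduced here.

```python
import math

# Hypothetical warning threshold; not a value taken from the paper.
ALARM_DISTANCE_M = 0.5

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def obstacles_to_alarm(user_pos, obstacle_positions, threshold=ALARM_DISTANCE_M):
    """Return the obstacles closer to the user than the alarm threshold."""
    return [p for p in obstacle_positions if distance(user_pos, p) < threshold]

# Example: user at the origin, two obstacles; only the near one triggers.
user = (0.0, 0.0, 0.0)
obstacles = [(0.3, 0.0, 0.1), (2.0, 0.0, 0.0)]
print(obstacles_to_alarm(user, obstacles))  # -> [(0.3, 0.0, 0.1)]
```

In a real system such a check would run on every position update (the paper reports one estimate every 0.09 s), and the alarm would be rendered inside the HMD rather than printed.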





Funding

This work was supported by a grant from Kyung Hee University in 2020 (KHU-20201110) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1F1A1076528).

Author information


Corresponding author

Correspondence to HyeongYeop Kang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mp4 71100 KB)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chung, S., Lee, T., Jeong, B. et al. VRCAT: VR collision alarming technique for user safety. Vis Comput 39, 3145–3159 (2023). https://doi.org/10.1007/s00371-022-02676-y


Keywords

Navigation