
HaptMR: Smart Haptic Feedback for Mixed Reality Based on Computer Vision Semantic

  • Conference paper
Virtual, Augmented and Mixed Reality (HCII 2021)

Abstract

This paper presents HaptMR, a system for tactile feedback based on semantic analysis with deep learning algorithms on a mobile Mixed Reality (MR) device. In this way, we improve the immersive MR experience and achieve better interaction between the user and real or virtual objects. Built on the second-generation Microsoft HoloLens (HL2), HaptMR combines the HL2 hand tracking system with fine haptic modules worn on the hands. Furthermore, we adapt the deep learning model Inception V3 to recognize the rigidity of objects. Based on the semantic analysis of the scene, when users make gestures or actions, their hands receive force feedback that approximates real haptic sensation. We conduct a within-subject user study to test the feasibility and usability of HaptMR. The study comprises two tasks, one on hand tracking and one on spatial awareness, and evaluates both the objective interaction experience (Interaction Accuracy, Algorithm Accuracy, Temporal Efficiency) and the subjective MR experience (Intuitiveness, Engagement, Satisfaction). After visualizing and analyzing the results, we conclude that HaptMR improves the immersive experience in MR and compensates for the sense of inauthenticity that MR otherwise produces. HaptMR can support applications in industrial use, spatial anchors, virtual barriers, and 3D semantic interpretation, and can serve as a foundation for other implementations.
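As a rough illustration of the pipeline summarized above, the sketch below fine-tunes Inception V3 via transfer learning to classify object rigidity from camera frames and maps the predicted class to a haptic drive level. This is a minimal sketch under stated assumptions, not the authors' implementation: the framework (TensorFlow/Keras), the three-level rigidity labels, and the helper `rigidity_to_vibration` are illustrative choices not taken from the paper.

```python
# Minimal sketch (not HaptMR's actual code): transfer-learning Inception V3
# to classify object rigidity, then mapping the class to a haptic amplitude.
import tensorflow as tf

NUM_RIGIDITY_CLASSES = 3  # e.g. soft / medium / rigid -- assumed labels

# Load Inception V3 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional backbone during fine-tuning

# Attach a small head that predicts a rigidity class per frame.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_RIGIDITY_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def rigidity_to_vibration(class_index: int) -> float:
    """Map a predicted rigidity class to a haptic drive amplitude in [0, 1].
    The mapping is a placeholder; HaptMR's actual force-feedback mapping is
    not specified on this page."""
    return {0: 0.2, 1: 0.5, 2: 0.9}[class_index]
```

In such a setup, the head would be trained on labeled images of objects with known rigidity, and the predicted class for the object under the user's hand would drive the intensity of the on-hand haptic modules.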



Author information

Correspondence to Yueze Zhang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y., Liang, R., Sun, Z., Koch, M. (2021). HaptMR: Smart Haptic Feedback for Mixed Reality Based on Computer Vision Semantic. In: Chen, J.Y.C., Fragomeni, G. (eds) Virtual, Augmented and Mixed Reality. HCII 2021. Lecture Notes in Computer Science, vol 12770. Springer, Cham. https://doi.org/10.1007/978-3-030-77599-5_18


  • DOI: https://doi.org/10.1007/978-3-030-77599-5_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77598-8

  • Online ISBN: 978-3-030-77599-5

  • eBook Packages: Computer Science, Computer Science (R0)
