AI driven human–computer interaction design framework of virtual environment based on comprehensive semantic data analysis with feature extraction

  • Published in: International Journal of Speech Technology

Abstract

Immersion, interactivity and creativity are the most basic and important attributes of virtual reality (VR). For this young medium, continuous development and the ongoing exploration of content define the current state of VR art, and the arrival of 5G communication technology provides an important foundation for popularizing VR. The rapid development of technology and hardware continually rewrites and expands the expressive possibilities of VR art design. Somatosensory interaction, the latest form of natural human–computer interaction, enables users to interact with computers directly through body movements and gestures and to control the environment at will. Its core value is that it gives the computer accurate and effective "eyes" with which to observe the world: the computer not only understands human body language but also carries out the instructions that body language conveys, achieving real-time interaction with people. Gesture, a common communication mode with strong expressive capability, is intuitive, natural and easy to understand; somatosensory interaction based on gesture recognition has therefore become an effective means of natural human–computer interaction. With the rapid development of key VR technologies, VR has been widely applied in games, fitness, real estate, education, film and television, and other industries; in recent years VR systems have become ubiquitous, including virtual home decoration systems, panoramic VR live broadcasting systems and virtual fitness systems. The design and implementation of such a VR system comprises four parts: scene modeling, rendering, performance and interaction, and network application. On this basis, this paper uses comprehensive semantic data analysis with feature extraction to build a human–computer interaction design model for the virtual environment, combining AI and speech information to perform the optimal analysis.
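The abstract does not specify how the gesture-recognition component is implemented. As an illustrative sketch only, a template-matching recognizer can compare an observed hand trajectory against stored gesture templates using dynamic time warping (DTW), one common technique for recognizing gesture trajectories; the function names and gesture labels below are hypothetical, not the authors' method:

```python
import math

def dtw_distance(a, b):
    """DTW distance between two gesture trajectories,
    each a list of (x, y) hand positions sampled over time."""
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated distance aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.hypot(a[i - 1][0] - b[j - 1][0],
                           a[i - 1][1] - b[j - 1][1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in a
                                 cost[i][j - 1],      # skip a point in b
                                 cost[i - 1][j - 1])  # match both points
    return cost[n][m]

def classify_gesture(trajectory, templates):
    """Return the label of the template closest to the
    observed trajectory under DTW distance."""
    return min(templates, key=lambda lbl: dtw_distance(trajectory, templates[lbl]))

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
observed = [(0, 0), (0.9, 0.1), (2.1, 0.0), (2.9, -0.1), (3.0, 0.0)]
label = classify_gesture(observed, templates)
```

Because DTW aligns trajectories nonlinearly in time, the recognizer tolerates gestures performed at different speeds or with different numbers of samples, which is why it is a popular baseline for this task.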

(Figures 1–15 appear in the full article.)


Author information


Corresponding author

Correspondence to Kunyu Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Li, K., Li, X. AI driven human–computer interaction design framework of virtual environment based on comprehensive semantic data analysis with feature extraction. Int J Speech Technol 25, 863–877 (2022). https://doi.org/10.1007/s10772-021-09954-5

