Abstract
Existing VR support approaches rely on either a single sensing function or a fixed combination of functions, e.g., support based on gesture, head, or body movement. To provide more flexible support methods and more intelligent support content for users in VR contexts, we propose a semi-automatic selection of interactive supports, conditioned on the VR context and on the user's feedback. In modeling this semi-automatic selection, we propose to evaluate performance by combining an AI-based evaluation, built on data of users' performance in VR, with the user's own initiative feedback. Furthermore, to make the estimation of VR support customizable and personalized, we propose to apply self-supervised learning, which allows estimation models to be trained or retrained at low data cost, including reduced labeling effort and reuse of existing models. We still need to evaluate the timing of selecting or modifying support methods; the balance between automatic and user-initiated control with respect to user preference, experience, smoothness of the VR context, and user awareness and understanding; and the scale, amount, and limits of the data and training required for stable, accurate, and useful estimation of VR support.
Acknowledgement
This work was supported by the Japan Science and Technology Agency (JST CREST: JPMJCR19F2, Research Representative: Prof. Yoichi Ochiai, University of Tsukuba, Japan) and by the University of Tsukuba (Basic Research Support Program Type A).
Appendices
Demonstration
Demonstration of predicting user support from gaze in a VR game: we showed a simple demo of AI-supported VR using gaze, body language, and voice. During the VR game, the AI was expected to support the user's shooting based on the analyzed gaze behavior and head or hand movements (Fig. 4).
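The implementation of the demo is not detailed here; as an illustration only, the following Python sketch shows one plausible way a gaze-dwell trigger for shooting assistance could be structured. The GazeSample structure, function names, and thresholds are our own assumptions rather than the actual demo code.

import math
from dataclasses import dataclass
from typing import List

@dataclass
class GazeSample:
    t: float       # timestamp in seconds
    yaw: float     # horizontal gaze angle in degrees
    pitch: float   # vertical gaze angle in degrees

def angular_distance(a: GazeSample, b: GazeSample) -> float:
    # Approximate angular distance between two gaze directions (degrees).
    return math.hypot(a.yaw - b.yaw, a.pitch - b.pitch)

def should_assist(samples: List[GazeSample],
                  target_yaw: float, target_pitch: float,
                  dwell_s: float = 0.5, tolerance_deg: float = 3.0) -> bool:
    # Trigger shooting assistance when the gaze has dwelt near the target:
    # every sample in the last dwell_s seconds must lie within tolerance_deg
    # of the target direction.
    if not samples:
        return False
    now = samples[-1].t
    recent = [s for s in samples if now - s.t <= dwell_s]
    target = GazeSample(t=now, yaw=target_yaw, pitch=target_pitch)
    return bool(recent) and all(
        angular_distance(s, target) <= tolerance_deg for s in recent
    )

# Example: a user steadily fixating a target 10 degrees to the right.
trace = [GazeSample(t=i * 0.05, yaw=10.0, pitch=0.0) for i in range(20)]
print(should_assist(trace, target_yaw=10.0, target_pitch=0.0))  # True

In a full system, the same decision could also take head or hand movement into account, or be overridden by the user's explicit feedback, in line with the semi-automatic selection described above.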
Modeling of Self-supervised Learning
We also show an example of modeling self-supervised learning using multi-factor data of users in VR contexts. In the model of Fig. 5, images captured from the VR context are trained into suitable feature representations, so that the prediction of user supports can be updated or improved from a VR dataset without additional labeling or data-cleaning work.
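Figure 5 describes the model only at a conceptual level; as an illustrative sketch under our own assumptions, the following PyTorch code outlines a contrastive (SimCLR-style) pretraining loop on unlabeled VR frames, with a small head that could later be fine-tuned on a few labeled examples to predict a support action. The encoder architecture, the augmentation, and the support head are hypothetical and stand in for whatever the actual VR pipeline provides.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Tiny convolutional encoder mapping VR frames to feature vectors.
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent_loss(z1, z2, temperature: float = 0.5):
    # NT-Xent contrastive loss: two augmented views of the same frame should
    # have similar embeddings, views of different frames dissimilar ones.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def augment(frames):
    # Placeholder augmentation: random horizontal flip plus pixel noise.
    flipped = torch.flip(frames, dims=[-1]) if torch.rand(1) < 0.5 else frames
    return flipped + 0.05 * torch.randn_like(frames)

encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Unlabeled VR frames; random tensors stand in for a real recorded dataset.
unlabeled_frames = torch.rand(256, 3, 64, 64)
loader = torch.utils.data.DataLoader(unlabeled_frames, batch_size=32, shuffle=True)

for epoch in range(2):                     # pretraining needs no labels
    for batch in loader:
        z1, z2 = encoder(augment(batch)), encoder(augment(batch))
        loss = nt_xent_loss(z1, z2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# A small head on top of the pretrained encoder could then be fine-tuned with
# only a few labeled examples to predict which support action to offer.
support_head = nn.Linear(128, 4)           # e.g., four hypothetical support types

Because the pretraining loss uses only the frames themselves, new VR sessions can be folded into the dataset and the encoder retrained without any labeling effort; only the lightweight support head requires labeled examples.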
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Mukherjee, R., Lu, J.L., Ochiai, Y. (2022). Designing AI-Support VR by Self-supervised and Initiative Selective Supports. In: Antona, M., Stephanidis, C. (eds.) Universal Access in Human-Computer Interaction. User and Context Diversity. HCII 2022. Lecture Notes in Computer Science, vol. 13309. Springer, Cham. https://doi.org/10.1007/978-3-031-05039-8_17