Abstract
The growing development of Mixed Reality (MR) simulators in areas as diverse as vehicle operation, sports, and healthcare underscores the importance of high-quality immersion. Currently, evaluating the immersion quality of these simulators relies solely on post-simulation feedback from users, typically gathered through subjective questionnaires. However, this method fails to capture the data needed to estimate the user's self-motion perception, which is crucial for optimizing immersion non-empirically. To address this gap, we propose a human homothetic perception framework that replicates human self-motion perception capabilities within a multimodal robotic system in MR environments. The framework combines a hybrid Gough-Stewart platform with a human-substitute NAO robot, recreating motion perception by fusing visual and vestibular information in a human perception model. We demonstrate the framework using trajectories from a Unity-based virtual ski simulator designed for the rehabilitation of individuals with disabilities. Immersion quality is evaluated by comparing the motion perception errors of the human model against the virtual trajectories. This approach offers the first calibration tool for MR applications that eliminates the need for human feedback.
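To make the error-based evaluation concrete, the following minimal sketch (not the authors' implementation) passes a toy virtual-trajectory yaw-rate profile through a classical first-order semicircular-canal model and scores the discrepancy between the commanded and model-perceived rotation as an RMSE. The canal time constant, the trajectory profile, and the RMSE metric are all illustrative assumptions standing in for the paper's full visual-vestibular perception model.

```python
# Hypothetical sketch of the immersion-error idea from the abstract:
# filter a virtual angular-velocity trajectory through a first-order
# semicircular-canal model, then score the perception error (RMSE).
import numpy as np
from scipy import signal

TAU = 5.7  # s, a commonly cited canal time constant (assumption)

def perceived_angular_velocity(t, omega):
    """Canal high-pass model: H(s) = tau*s / (tau*s + 1)."""
    canal = signal.TransferFunction([TAU, 0.0], [TAU, 1.0])
    _, omega_hat, _ = signal.lsim(canal, U=omega, T=t)
    return omega_hat

def perception_error(t, omega):
    """RMSE between virtual and model-perceived self-rotation."""
    omega_hat = perceived_angular_velocity(t, omega)
    return np.sqrt(np.mean((omega - omega_hat) ** 2))

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 2000)
    omega = 0.2 * np.sin(0.5 * t)  # toy ski-turn yaw-rate profile (rad/s)
    print(f"perception RMSE: {perception_error(t, omega):.4f} rad/s")
```

In the framework described above, the vestibular channel would be complemented by a visual channel before the error is computed; a second filtered input fused with the canal output would be the natural extension of this sketch.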
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Houda, T., Amouri, A., Beghdadi, A., Beji, L. (2024). Multi-robot Interaction with Mixed Reality for Enhanced Perception. In: Hamlich, M., Dornaika, F., Ordonez, C., Bellatreche, L., Moutachaouik, H. (eds) Smart Applications and Data Analysis. SADASC 2024. Communications in Computer and Information Science, vol 2168. Springer, Cham. https://doi.org/10.1007/978-3-031-77043-2_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-77042-5
Online ISBN: 978-3-031-77043-2
eBook Packages: Intelligent Technologies and Robotics (R0)