Abstract
On-site human-robot collaboration, in which astronauts work side by side with rovers to accomplish tasks, is one form of lunar exploration. Mixed reality (MR) is being explored for astronaut-rover teams, but existing research focuses primarily on MR technology for natural human-robot interaction and often neglects transparency in decision-making. We propose using shared MR together with optimized perception and localization techniques for human-robot collaboration in lunar exploration. Shared spatial context and interactive holographic content help astronauts and rovers reach consensus during decision-making. This approach combines the strengths of astronauts and rovers during rover navigation missions, avoiding both blind reliance on rover autonomy and purely manual operation by astronauts. To improve terrain perception and visualization for astronaut-rover teams during lunar navigation tasks, we develop a risk-aware lunar terrain parsing method based on multiscale eigenvalue-based features and an optimized Random Forest classifier, which outperforms comparable methods with an accuracy of 94.2%. Our co-located MR system incorporates a marker- and instance-based spatial anchor method tailored to the unique topography of lunar terrain and optimized for resource conservation. Experiments on lunar navigation with three terrain conditions and four configurations confirm that shared MR improves task performance and reduces workload. The proposed shared MR paradigm and related technologies can serve as a reference for future lunar exploration missions.
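The terrain parsing pipeline described above can be illustrated with a minimal sketch. The sketch below shows the standard eigenvalue-based point-cloud features (linearity, planarity, sphericity, derived from the eigenvalues of each point's local covariance matrix) feeding a Random Forest classifier; it uses a single neighborhood scale and synthetic data, whereas the paper's method is multiscale, so the function `eigenvalue_features`, the neighborhood size `k`, and the toy labels are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def eigenvalue_features(points, k=20):
    """Per-point geometric features from eigenvalues of the local covariance
    matrix: linearity, planarity, sphericity. A multiscale variant would
    concatenate these features over several values of k."""
    feats = np.zeros((len(points), 3))
    for i in range(len(points)):
        # k nearest neighbours by brute-force distance (fine for a sketch)
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # λ1 ≥ λ2 ≥ λ3
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l1 - l2) / l1,    # linearity
                    (l2 - l3) / l1,    # planarity
                    l3 / l1]           # sphericity
    return feats

# Toy terrain: a flat patch (traversable) vs. a scattered rocky blob (risky)
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.001, 200)]
blob = rng.normal(0, 0.2, (200, 3)) + [2.0, 2.0, 0.0]
X = eigenvalue_features(np.vstack([plane, blob]))
y = np.r_[np.zeros(200), np.ones(200)]   # 0 = flat, 1 = rough
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Flat points score high on planarity and blob points on sphericity, so even this three-feature classifier separates the two classes cleanly; the paper's optimized, multiscale version extends the same idea to real lunar terrain categories.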
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics declarations
Conflicts of interest
This work was supported by Shanghai Aerospace System Engineering Institute (Grant numbers: XJ022051600528). The authors have no relevant financial or non-financial interests to disclose. All authors read and approved the final manuscript. Informed consent was obtained from all individual participants included in the study.
Cite this article
Ji, H., Li, S., Chen, J. et al. On-site human-robot collaboration for lunar exploration based on shared mixed reality. Multimed Tools Appl 83, 18235–18260 (2024). https://doi.org/10.1007/s11042-023-16178-z