
Now Look Here! ⇓ Mixed Reality Improves Robot Communication Without Cognitive Overload

  • Conference paper
Virtual, Augmented and Mixed Reality (HCII 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14027))


Abstract

Recently, researchers have initiated a new wave of convergent research in which Mixed Reality visualizations enable new modalities of human-robot communication, including Mixed Reality Deictic Gestures (MRDGs) – the use of visualizations like virtual arms or arrows to serve the same purpose as traditional physical deictic gestures. But while researchers have demonstrated a variety of benefits to these gestures, it is unclear whether the success of these gestures depends on a user’s level and type of cognitive load. We explore this question through an experiment grounded in rich theories of cognitive resources, attention, and multi-tasking, with significant inspiration drawn from Multiple Resource Theory. Our results suggest that MRDGs provide task-oriented benefits regardless of cognitive load, but only when paired with complex language. These results suggest that designers can pair rich referring expressions with MRDGs without fear of cognitively overloading their users.

Notes

  1. These block colors were chosen for consistent visual processing, as blue light is processed differently within the eye owing to spatial and frequency differences between the red/green and blue cone types. This meant our task was not accessible to red/green colorblind participants, whose data we therefore had to exclude.


Acknowledgement

This research was funded in part by NSF grants IIS-1909864 and CNS-1823245.

Author information


Correspondence to Nhan Tran.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tran, N., Grant, T., Phung, T., Hirshfield, L., Wickens, C., Williams, T. (2023). Now Look Here! ⇓ Mixed Reality Improves Robot Communication Without Cognitive Overload. In: Chen, J.Y.C., Fragomeni, G. (eds) Virtual, Augmented and Mixed Reality. HCII 2023. Lecture Notes in Computer Science, vol 14027. Springer, Cham. https://doi.org/10.1007/978-3-031-35634-6_28


  • DOI: https://doi.org/10.1007/978-3-031-35634-6_28


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35633-9

  • Online ISBN: 978-3-031-35634-6

  • eBook Packages: Computer Science (R0)
