DOI: 10.1145/3434073.3444658 · HRI Conference Proceedings · Research Article

Smooth Operator: Tuning Robot Perception Through Artificial Movement Sound

Published: 08 March 2021

ABSTRACT

Can we influence how a robot is perceived by designing the sound of its movement? Drawing from practices in film sound, we overlaid a video depicting a robot's movement routine with three types of artificial movement sound. In a between-subjects study design, participants saw either one of the three designs or a quiet control condition and rated the robot's movement quality, safety, capability, and attractiveness. We found that, compared to our control, the sound designs both increased and decreased perceived movement quality. Coupling the same robotic movement with different sounds led to the motions being rated as more or less precise, elegant, jerky, or uncontrolled, among other qualities. We further found that the sound conditions decreased perceived safety and did not affect perceived capability or attractiveness. More unrealistic sound conditions led to larger differences in ratings, while the subtle addition of harmonic material was not rated differently from the control condition on any of the measures. Based on these findings, we discuss the challenges and opportunities regarding the use of artificial movement sound as an implicit channel of communication that may eventually be able to selectively target specific characteristics, helping designers create more refined and nuanced human-robot interactions.


Supplemental Material

hrifp1113vf.mp4 (mp4, 73.8 MB)


Published in

HRI '21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
March 2021, 425 pages
ISBN: 9781450382892
DOI: 10.1145/3434073
General Chairs: Cindy Bethel, Ana Paiva
Program Chairs: Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir

Copyright © 2021 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 8 March 2021


Qualifiers

research-article

Acceptance Rates

Overall Acceptance Rate: 242 of 1,000 submissions, 24%
