Emotive Acoustics: Sound Design in Robotic Emotion Expression. A Study on Participant Generated Sounds

  • Conference paper
  • Human-Computer Interaction (HCII 2024)

Abstract

Sound can make human-robot interaction (HRI) considerably more immersive. To explore this, a study was carried out to identify which types of sounds may be best suited to HRI. Participants watched a video in which a social robot navigates an environment and expresses six basic emotions in specific situations, and then generated a sound for each emotion the robot displayed. Results revealed a preference for human-like sounds, especially onomatopoeia. Moreover, most participants deemed the sounds produced by current robots inappropriate and lacking in empathy, primarily because they resemble machine-like noises. The study provides specific insights into how auditory expressions of emotion are perceived, which can inform the design of inclusive and emotionally compelling HRI. The results underscore how important it is to give social robots more human-like auditory features.



Acknowledgments

Special thanks to the MonarCH project (Multi-Robot Cognitive Systems Operating in Hospitals) for permission to use its robotic platform. The project, funded under reference FP7-ICT-2011-9-601033, was instrumental in our exploration of human-robot interaction. More information about the MonarCH project is available at https://welcome.isr.tecnico.ulisboa.pt/projects/multi-robot-cognitive-systems-operating-in-hospitals/.

Furthermore, this study received support from UNIDCOM/IADE under a grant from the Fundação para a Ciência e Tecnologia (FCT), with the reference UIDB/00711/2020 attributed to UNIDCOM/IADE - Unidade de Investigação em Design e Comunicação, Lisbon, Portugal.

Author information

Corresponding author

Correspondence to Emília Duarte.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Andrade Pires, G., Tsvetcoff, R., Ayanoglu, H., Duarte, E. (2024). Emotive Acoustics: Sound Design in Robotic Emotion Expression. A Study on Participant Generated Sounds. In: Kurosu, M., Hashizume, A. (eds) Human-Computer Interaction. HCII 2024. Lecture Notes in Computer Science, vol 14685. Springer, Cham. https://doi.org/10.1007/978-3-031-60412-6_3

  • DOI: https://doi.org/10.1007/978-3-031-60412-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60411-9

  • Online ISBN: 978-3-031-60412-6

  • eBook Packages: Computer Science; Computer Science (R0)
