Abstract
In recent years, advances in artificial intelligence have enabled speech recognition systems to interpret and transcribe speech with high accuracy in a variety of complex environments, improving typing efficiency. However, the output of speech-to-text conversion consists only of plain text and simple punctuation, which hinders users' genuine emotional expression. Such flat transcribed text impedes the formation of context, weakens the emotional transmission of semantics, and leads to a poor user experience when users communicate with others. From the perspective of user experience and emotion, this article examines the factors that aid emotional restoration in speech-to-text conversion. Through a qualitative and quantitative study, this research compares the emotional effects of message texts composed with four different elements (emoticons, punctuation, interjections, and WeChat's speech-to-text function) and further investigates which factors aid speech-to-text emotional restoration. The results reveal that emoticons and punctuation have a positive effect on speech-to-text emotional restoration. Adding these two elements restores the emotional effect of speech in text form with lower loss, substantially improves the user experience in mobile communication, and makes online communication smoother.
Acknowledgments
We thank the Foundation for Young Talents in Higher Education of Guangdong, China [Project Batch No. 2020WQNCX061] for supporting this research. Part of the study was supported by the Shenzhen Educational Science Planning Project (zdfz20015).
© 2021 Springer Nature Switzerland AG
Cite this paper
Chen, X., Deng, Q. (2021). Exploring the Factors Aiding Speech-to-Text Emotional Restoration. In: Soares, M.M., Rosenzweig, E., Marcus, A. (eds) Design, User Experience, and Usability: Design for Diversity, Well-being, and Social Development. HCII 2021. Lecture Notes in Computer Science(), vol 12780. Springer, Cham. https://doi.org/10.1007/978-3-030-78224-5_29
Print ISBN: 978-3-030-78223-8
Online ISBN: 978-3-030-78224-5