DOI: 10.1145/3623462.3623469
Research article

En train d'oublier: toward affective virtual environments

Published: 06 December 2023

ABSTRACT

This study explores the development of intelligent affective virtual environments generated by bimodal emotion recognition techniques and multimodal feedback. A combined semantic and acoustic analysis predicts the emotions conveyed by spoken language, fostering an expressive and transparent control structure. Textual contents and emotional predictions are mapped onto virtual environments of real locations as audiovisual feedback.
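The bimodal pipeline described above can be sketched as a late-fusion step over two per-class probability distributions, one from the semantic (text) model and one from the acoustic model. The emotion classes, probability values, and fusion weight below are illustrative assumptions for a minimal sketch, not the authors' implementation.

```python
# Minimal late-fusion sketch (assumed class set and weights, not the paper's).

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]

def fuse_predictions(text_probs, audio_probs, text_weight=0.5):
    """Weighted average of the text and audio per-class distributions."""
    fused = {e: text_weight * text_probs.get(e, 0.0)
                + (1.0 - text_weight) * audio_probs.get(e, 0.0)
             for e in EMOTIONS}
    label = max(fused, key=fused.get)  # arg-max over the fused distribution
    return label, fused

# Example: the text model detects sad wording, the audio model a flat voice.
text_probs = {"sadness": 0.7, "neutral": 0.2, "joy": 0.1}
audio_probs = {"neutral": 0.6, "sadness": 0.3, "fear": 0.1}
label, fused = fuse_predictions(text_probs, audio_probs)
```

Keeping both distributions (rather than only the fused label) is what later allows the system to measure disagreement between the two modalities.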

To demonstrate the application of this system, we developed a case study titled "En train d'oublier," focusing on a train cemetery in Uyuni, Bolivia. The train cemetery holds historical significance as a site where abandoned trains symbolize the passage of time and the interaction between human activities and nature's reclamation. The space is transformed into an immersive and emotionally poetic experience through oral language and affective virtual environments that activate memories, as the system utilizes the transcribed text to synthesize images and modifies the musical output based on the predicted emotional states.
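One plausible way to modify musical output from a predicted emotional state, as described above, is to place each emotion on Russell's valence/arousal circumplex and map those coordinates to musical parameters. The coordinates and parameter ranges below are illustrative assumptions, not the system's actual mapping.

```python
# Hedged sketch: emotion -> (valence, arousal) -> musical parameters.
# Coordinates in [-1, 1] are assumed for illustration.

VA = {
    "joy": (0.8, 0.5), "sadness": (-0.7, -0.5),
    "anger": (-0.6, 0.8), "fear": (-0.8, 0.6), "neutral": (0.0, 0.0),
}

def music_params(emotion):
    valence, arousal = VA[emotion]
    tempo_bpm = 90 + 50 * arousal                 # higher arousal -> faster tempo
    mode = "major" if valence >= 0 else "minor"   # positive valence -> major mode
    return {"tempo_bpm": tempo_bpm, "mode": mode}
```

For example, a "sadness" prediction would slow the tempo and switch to a minor mode, while "joy" would do the opposite.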

The proposed bimodal emotion recognition techniques achieve 94% and 89% accuracy. The audiovisual mapping strategy accounts for divergence between the two predictions, generating an intended tension between the graphical and the musical representation. Using video and web art techniques, we experimented with the generated environments to create diverse poetic proposals.
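The "tension" between graphical and musical representations could be driven by a divergence measure over the two modality distributions. As one assumed stand-in (the paper does not specify the measure), a Jensen-Shannon divergence with base-2 logarithms yields a control value in [0, 1]:

```python
import math

def tension(text_probs, audio_probs, classes):
    """Jensen-Shannon divergence (base 2) between the two modality
    distributions, usable as a 'tension' control signal in [0, 1].
    This is an illustrative choice, not the authors' stated measure."""
    eps = 1e-9  # floor to avoid log(0) for classes one model never predicts
    p = {c: text_probs.get(c, eps) for c in classes}
    q = {c: audio_probs.get(c, eps) for c in classes}
    m = {c: 0.5 * (p[c] + q[c]) for c in classes}

    def kl(a, b):
        return sum(a[c] * math.log2(a[c] / b[c]) for c in classes if a[c] > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

When both modalities agree the value approaches 0; when they fully disagree it approaches 1, which could be mapped directly to how far the music is allowed to drift from the imagery.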


Published in:
KUI '23: Proceedings of the 20th International Conference on Culture and Computer Science: Code and Materiality
September 2023, 190 pages
ISBN: 9798400708367
DOI: 10.1145/3623462
Editors: Lucas Fabian Olivero, António Bandeira Araújo

Copyright © 2023 ACM.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

