DOI: 10.1145/2702613.2732792
Work in Progress

Augmenting Affect from Speech with Generative Music

Published: 18 April 2015

ABSTRACT

In this work we propose a prototype to improve the interpersonal communication of emotions: music is generated on the fly with the same affect as the accompanying speech. Emotions detected in the speech are conveyed to the music according to rules from music psychology. We discuss existing evaluated modules for affective generative music and speech emotion detection, as well as use cases, emotional models, and planned evaluations.
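The pipeline the abstract describes (detect affect in speech, then drive a generative music system with it) is commonly parameterized by valence and arousal. The sketch below is an illustrative assumption, not the authors' implementation: the function name, value ranges, and the specific rules (high arousal → faster tempo and louder dynamics, positive valence → major mode) are hypothetical, loosely following rules reported in the music-psychology literature.

```python
def affect_to_music_params(valence: float, arousal: float) -> dict:
    """Map a valence/arousal pair (each in [-1, 1]) to coarse musical
    parameters: high arousal -> faster tempo and louder dynamics,
    positive valence -> major mode. Illustrative rules only."""
    unit_arousal = (arousal + 1) / 2                  # rescale to [0, 1]
    tempo_bpm = 80 + 60 * unit_arousal                # 80-140 BPM
    loudness = 0.4 + 0.5 * unit_arousal               # 0.4-0.9 (relative)
    mode = "major" if valence >= 0 else "minor"
    return {
        "tempo_bpm": round(tempo_bpm),
        "mode": mode,
        "loudness": round(loudness, 2),
    }

# Example: calm, content speech -> moderate-tempo major-mode music
print(affect_to_music_params(valence=0.6, arousal=-0.4))
```

A real system would update these parameters continuously as the speech-emotion detector emits new estimates, smoothing transitions so the music does not jump abruptly between affective states.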


Published in
      CHI EA '15: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems
      April 2015
      2546 pages
ISBN: 9781450331463
DOI: 10.1145/2702613

      Copyright © 2015 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

CHI EA '15 Paper Acceptance Rate: 379 of 1,520 submissions, 25%. Overall Acceptance Rate: 6,164 of 23,696 submissions, 26%.

