
Extracting Emotions from Music Data

  • Conference paper
Foundations of Intelligent Systems (ISMIS 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3488)


Abstract

Music is not only a set of sounds; it evokes emotions that are subjectively perceived by listeners. The growing amount of audio data available on CDs and on the Internet creates a need for content-based searching of these files, for instance when a user wants to find pieces in a specific mood. The goal of this paper is to develop tools for such a search. A method for the objective description (parameterization) of audio files is proposed, and experiments on a set of music pieces are described. The results are summarized in the concluding section.
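
The abstract describes a pipeline of two steps: parameterize each audio file into a fixed-length descriptor vector, then classify those vectors by mood. The paper's own parameterization is not reproduced on this page, so the sketch below is only an illustration of that kind of pipeline under stated assumptions: it uses the librosa and scikit-learn libraries (not the authors' toolchain), a generic stand-in feature set (MFCCs, spectral centroid, zero-crossing rate), a nearest-neighbour classifier, and hypothetical file names and mood labels.

# Illustrative sketch only: the libraries, features, file names and labels
# below are assumptions, not the method reported in the paper.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def describe(path):
    """Return a fixed-length descriptor vector for one audio file."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # brightness
    zcr = librosa.feature.zero_crossing_rate(y)                # noisiness
    feats = np.vstack([mfcc, centroid, zcr])
    # Summarize frame-level features by their mean and spread over time.
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# Hypothetical training data: audio files paired with mood labels.
train_files = ["calm_01.wav", "energetic_01.wav"]
train_moods = ["calm", "energetic"]

X = np.array([describe(f) for f in train_files])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_moods)

print(clf.predict([describe("query.wav")]))  # predicted mood label

Any classifier could replace the nearest-neighbour model here; the point is only that mood search reduces to ordinary supervised classification once each piece has an objective numeric description.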






Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wieczorkowska, A., Synak, P., Lewis, R., Raś, Z.W. (2005). Extracting Emotions from Music Data. In: Hacid, M.-S., Murray, N.V., Raś, Z.W., Tsumoto, S. (eds) Foundations of Intelligent Systems. ISMIS 2005. Lecture Notes in Computer Science (LNAI), vol 3488. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11425274_47


  • DOI: https://doi.org/10.1007/11425274_47

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25878-0

  • Online ISBN: 978-3-540-31949-8

  • eBook Packages: Computer Science (R0)
