An Automatic Mechanism to Recognize and Generate Emotional MIDI Sound Arts Based on Affective Computing Techniques

Hao-Chiang Koong Lin, Cong Jie Sun, Bei Ni Su, Zu An Lin
International Journal of Online Pedagogy and Course Design (IJOPCD)
Copyright: © 2013 | Volume: 3 | Issue: 3 | Pages: 62-75 (14 pages)
ISSN: 2155-6873 | EISSN: 2155-6881 | EISBN13: 9781466634602 | DOI: 10.4018/ijopcd.2013070104

Abstract

All kinds of art can be represented in digital form, and one of them is sound art, including orally transmitted ballads, classical music, religious music, popular music, and emerging computer music. Recently, affective computing has drawn considerable attention in academia; it spans two aspects, physiology and psychology. Through a variety of sensing devices, the authors can capture behaviors that express feelings and emotions, and thereby not only identify but also understand human emotions. This work focuses on designing and building a MAX/MSP program that generates emotional music automatically; it can also recognize the emotion conveyed when users play MIDI instruments and produce matching visual effects. The authors pursue two major goals: (1) producing an art performance that combines dynamic visuals with auditory tones, and (2) making computers understand human emotions and interact through music by means of affective computing. The results of this study are as follows: (1) the authors design a mechanism that maps musical tones to recognized human emotions; (2) they develop a combination of affective computing and an automatic music generator; (3) they design a music system that can be used with MIDI instruments and incorporate other musical effects to enhance musicality; and (4) they assess and complete the emotion-discrimination mechanism so that mood music can be fed back accurately. The authors make computers simulate (or even possess) human emotion and obtain a relevant basis for more accurate sound feedback. The System Usability Scale is used to analyze and discuss the usability of the system. When the “auto mood music generator” is used, the average score of each item is clearly higher than the neutral midpoint (four points) for overall response and musical performance, and the average score exceeds five points in each part of the Interaction and Satisfaction Scale. Subjects are willing to accept this interactive work, which shows that it is usable and has the potential for further development.
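The abstract does not describe how recognized emotions are turned into musical parameters, so the following is only a minimal illustrative sketch of the general idea, assuming a standard valence-arousal emotion model. The thresholds, scale choices, and names (emotion_to_midi_params, generate_phrase) are hypothetical and are not taken from the authors' MAX/MSP patch.

```python
# Illustrative sketch only: a hypothetical valence-arousal -> MIDI-parameter mapping.
# The paper's actual MAX/MSP rule set is not public here; all values below are assumptions.

import random

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # major scale degrees (semitones above the tonic)
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural minor scale degrees

def emotion_to_midi_params(valence, arousal):
    """Map an emotion point (valence, arousal in [-1, 1]) to musical parameters."""
    tempo_bpm = 60 + (arousal + 1) * 60           # calm ~60 BPM, excited ~180 BPM
    scale = MAJOR if valence >= 0 else MINOR      # positive valence -> major mode
    base_note = 60 + int(valence * 12)            # shift register up/down with valence
    velocity = max(1, min(127, int(64 + arousal * 40)))  # louder when aroused
    return tempo_bpm, scale, base_note, velocity

def generate_phrase(valence, arousal, length=8, seed=0):
    """Return a tempo and a list of (midi_note, velocity, duration_in_beats) tuples."""
    random.seed(seed)
    tempo, scale, base, vel = emotion_to_midi_params(valence, arousal)
    notes = []
    for _ in range(length):
        degree = random.choice(scale)
        duration = 0.5 if arousal > 0 else 1.0    # shorter notes for high arousal
        notes.append((base + degree, vel, duration))
    return tempo, notes

if __name__ == "__main__":
    # Example emotion point roughly corresponding to "content / relaxed".
    tempo, phrase = generate_phrase(valence=0.7, arousal=-0.3)
    print(f"tempo = {tempo:.0f} BPM")
    for note, vel, dur in phrase:
        print(f"note {note:3d}  velocity {vel:3d}  duration {dur} beats")
```

Running the example prints a short note phrase for one emotion point; a real system in the spirit of the paper would instead stream such notes to a MIDI synthesizer and update the parameters as the recognized emotion changes.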
