ABSTRACT
Selecting and assigning appropriate background music (BGM) for each scene or cut is essential when editing a video or a photo slideshow, but it is a laborious task. Aiming to realise automatic BGM selection and assignment, we propose a method that automatically assigns emotion tags to BGM. The method requires a model for classifying BGM by emotion. To build this model, we use a set of movie-scene BGM tracks that a group of 14 users tagged with five sentiments: Love, Surprise, Joy, Sadness, and Fear. After confirming inter-rater agreement, we extracted audio features from each file in the dataset and trained a model with the machine-learning tool WEKA using the random forest algorithm. Evaluated by cross-validation, the model predicts the emotion of a BGM track with 94% accuracy, demonstrating the effectiveness of the proposed approach.
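The classification pipeline described above can be sketched as follows. This is an illustrative approximation only: the abstract does not specify the extracted audio features or the WEKA configuration, so this sketch substitutes scikit-learn's random forest and synthetic, MFCC-like placeholder features (one Gaussian cluster per emotion class) purely to show the train-and-cross-validate workflow.

```python
# Hedged sketch of the paper's pipeline: random forest + cross-validation
# over five emotion classes. Features and hyperparameters are assumptions,
# not the authors' actual setup (they used WEKA, not scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

EMOTIONS = ["Love", "Surprise", "Joy", "Sadness", "Fear"]

rng = np.random.default_rng(0)
n_per_class, n_features = 40, 13  # e.g. 13 MFCC means per clip (assumed)

# Synthetic stand-in for per-clip audio features: a separable Gaussian
# cluster per emotion class, so the example trains to high accuracy.
X = np.vstack([
    rng.normal(loc=3.0 * k, scale=1.0, size=(n_per_class, n_features))
    for k in range(len(EMOTIONS))
])
y = np.repeat(np.arange(len(EMOTIONS)), n_per_class)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice the synthetic `X` would be replaced by real features extracted from each audio file (the abstract does not name the extraction tool), with `y` taken from the majority tag among the 14 annotators.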
Poster: Sentiment Analysis of BGM Toward Automatic BGM Selection Based on Emotion