DOI: 10.1145/3501409.3501691
Research article · EITCE Conference Proceedings

Melody Generation with Emotion Constraint

Published: 31 December 2021

ABSTRACT

Most existing melody generation models introduce chord, rhythm, and other constraints to ensure the quality of the generated melody, but they ignore the importance of emotion. Music is an emotional art: as the primary component of a piece of music, a melody usually carries a clear emotional expression. It is therefore necessary to introduce emotion information and emotion constraints, so that the model learns emotion-related characteristics from the given information and generates melodies with clear emotional expression. To this end, we propose ECMG, a melody generation model with emotion constraints. The model takes a Generative Adversarial Network (GAN) as its backbone and adds an emotion encoder and an emotion classifier to introduce emotion information and emotion constraints. We evaluated both the quality and the emotional expression of the melodies generated by ECMG. In the quality evaluation, the score of ECMG's melodies differs from that of the real melodies in the training set by less than 0.2, and is also close to the score of melodies generated by PopMNet. In the emotion evaluation, the classification accuracy in both the four-category and the two-category settings is well above chance (25% and 50%, respectively). These results show that ECMG can generate melodies with specified emotions while maintaining high generation quality.
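The abstract describes the architecture only at a high level: a GAN backbone, an emotion encoder that injects emotion information into the generator, and an emotion classifier that enforces the emotion constraint. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of one plausible reading, in the style of an AC-GAN: the emotion label is embedded and concatenated with the generator's noise input, and an auxiliary classifier on the generated melody yields a cross-entropy term that can serve as the emotion constraint. All sizes (embedding, noise, sequence length, pitch vocabulary) are assumptions, and the linear layers stand in for the real networks.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EMOTIONS = 4   # e.g. the four quadrants of a valence/arousal model (assumed)
EMB_DIM = 8        # emotion embedding size (assumed)
NOISE_DIM = 16     # generator noise size (assumed)
SEQ_LEN = 32       # melody length in time steps (assumed)
NUM_PITCHES = 64   # pitch vocabulary size (assumed)

# "Emotion encoder": a learnable embedding table, random here for illustration.
emotion_embedding = rng.normal(size=(NUM_EMOTIONS, EMB_DIM))

# "Generator": a single linear layer standing in for the GAN generator.
W_gen = rng.normal(size=(NOISE_DIM + EMB_DIM, SEQ_LEN * NUM_PITCHES)) * 0.01

def generate(emotion_id: int) -> np.ndarray:
    """Produce melody pitch logits conditioned on a discrete emotion label."""
    z = rng.normal(size=NOISE_DIM)
    cond = np.concatenate([z, emotion_embedding[emotion_id]])
    logits = cond @ W_gen
    return logits.reshape(SEQ_LEN, NUM_PITCHES)

# "Emotion classifier": a linear head over the flattened melody; during GAN
# training its cross-entropy loss would push the generator toward melodies
# that express the requested emotion.
W_cls = rng.normal(size=(SEQ_LEN * NUM_PITCHES, NUM_EMOTIONS)) * 0.01

def emotion_loss(melody: np.ndarray, emotion_id: int) -> float:
    """Cross-entropy between the classifier's prediction and the target emotion."""
    scores = melody.reshape(-1) @ W_cls
    scores -= scores.max()  # subtract max for numerical stability of softmax
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(-np.log(probs[emotion_id]))

melody = generate(emotion_id=2)
print(melody.shape)                 # (32, 64)
print(emotion_loss(melody, 2) > 0)  # True: constraint loss is positive
```

In a full training loop, this auxiliary classifier loss would be added to the usual adversarial loss, which is how the "emotion constraint" of the title could be realized under these assumptions.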

References

  1. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27.
  2. Yu Y, Harscoët F, Canales S, et al. Lyrics-conditioned neural melody generation[C]//International Conference on Multimedia Modeling. Springer, Cham, 2020: 709--714.
  3. Zhu H, Liu Q, Yuan N J, et al. XiaoIce Band: A melody and arrangement generation framework for pop music[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 2837--2846.
  4. Chen Y, Lerch A. Melody-conditioned lyrics generation with SeqGANs[J]. arXiv preprint arXiv:2010.14709, 2020.
  5. Yuan R, Zhang G, Yang A, et al. Diverse melody generation from Chinese lyrics via mutual information maximization[J]. arXiv preprint arXiv:2012.03805, 2020.
  6. Waite E, Eck D, Roberts A, Abolafia D. Project Magenta: Generating long-term structure in songs and stories, 2016. https://magenta.tensorflow.org/blog/2016/07/15/lookback-rnn-attention-rnn/.
  7. Yang L C, Chou S Y, Yang Y H. MidiNet: A convolutional generative adversarial network for symbolic-domain music generation[J]. arXiv preprint arXiv:1703.10847, 2017.
  8. Scirea M, Togelius J, Eklund P, et al. Affective evolutionary music composition with MetaCompose[J]. Genetic Programming and Evolvable Machines, 2017, 18(4): 433--465.
  9. Roberts A, Engel J, Raffel C, et al. A hierarchical latent vector model for learning long-term structure in music[J]. arXiv preprint arXiv:1803.05428, 2018.
  10. Yu Y, Srivastava A, Canales S. Conditional LSTM-GAN for melody generation from lyrics[J]. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2021, 17(1): 1--20.
  11. Wu J, Liu X, Hu X, et al. PopMNet: Generating structured pop music melodies using neural networks[J]. Artificial Intelligence, 2020, 286: 103303.
  12. Laurier C, Herrera P, Mandel M, et al. Audio music mood classification using support vector machine[J]. MIREX task on Audio Mood Classification, 2007: 2--4.
  13. Panda R E S, Malheiro R, Rocha B, et al. Multi-modal music emotion recognition: A new dataset, methodology and comparative analysis[C]//10th International Symposium on Computer Music Multidisciplinary Research (CMMR 2013). 2013: 570--582.
  14. Russell J A. Core affect and the psychological construction of emotion[J]. Psychological Review, 2003, 110(1): 145.
  15. Yang Y H, Lin Y C, Su Y F, et al. A regression approach to music emotion recognition[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2008, 16(2): 448--457.
  16. Lin Y, Chen X, Yang D. Exploration of music emotion recognition based on MIDI[C]//ISMIR. 2013: 221--226.
  17. Delbouys R, Hennequin R, Piccoli F, et al. Music mood detection based on audio and lyrics with deep neural net[J]. arXiv preprint arXiv:1809.07276, 2018.
  18. Lin J C, Wei W L, Wang H M. Automatic music video generation based on emotion-oriented pseudo song prediction and matching[C]//Proceedings of the 24th ACM International Conference on Multimedia. 2016: 372--376.
  19. Madhok R, Goel S, Garg S. SentiMozart: Music generation based on emotions[C]//ICAART (2). 2018: 501--506.
  20. Liu R, Yuan S, Chen M, et al. MaLiang: An emotion-driven Chinese calligraphy artwork composition system[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 4394--4396.
  21. Hou Y, Yao H, Sun X, et al. Soul Dancer: Emotion-based human action generation[J]. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2020, 15(3s): 1--19.
  22. Raffel C. Learning-based methods for comparing sequences, with applications to audio-to-MIDI alignment and matching[M]. Columbia University, 2016.
  23. Aljanaki A, Yang Y H, Soleymani M. Developing a benchmark for emotional analysis of music[J]. PLoS ONE, 2017, 12(3): e0173392.
  24. Panda R, Malheiro R, Paiva R P. Novel audio features for music emotion recognition[J]. IEEE Transactions on Affective Computing, 2018, 11(4): 614--626.
  25. Panda R, Malheiro R, Paiva R P. Musical texture and expressivity features for music emotion recognition[C]//19th International Society for Music Information Retrieval Conference (ISMIR 2018). 2018: 383--391.

Published in

EITCE '21: Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering
October 2021, 1723 pages
ISBN: 9781450384322
DOI: 10.1145/3501409
Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

EITCE '21 paper acceptance rate: 294 of 531 submissions (55%). Overall acceptance rate: 508 of 972 submissions (52%).
