Abstract
This paper presents an automatic music generation method for video games that can create various types of music to enhance player immersion and experience, while serving as a more cost-effective alternative to human-composed music. Automatic music generation is an interdisciplinary research area that combines topics such as artificial intelligence, music theory, art history, sound engineering, signal processing, and psychology, so the literature to review is vast. Musical compositions can be analyzed from various perspectives, and their applications are extensive, including films, games, and advertisements. Specific music generation methods tend to perform well only within a narrow domain, and fully computer-generated music is still rarely encountered. The proposed algorithm, which uses an RNN and four parameters to control the generation process, was implemented in PyTorch, and real-time communication with the game was established over the WebSocket protocol. The algorithm was tested with 14 players who played four levels, each with different background music that was either composed or generated live. The results showed that players enjoyed the generated music more than the composed music. After improvements based on the first round of playtests, all of the generated music received better evaluations than the composed, looped music in the second run.
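As a rough illustration of the pipeline described in the abstract, the sketch below shows a PyTorch RNN whose next-note prediction is conditioned on four control parameters, with each generated event pushed to the game over a WebSocket connection. This is a minimal sketch under stated assumptions, not the authors' implementation: the GRU architecture, the control-parameter names, the MIDI-style note encoding, and the endpoint ws://localhost:8765 are all placeholders for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): an RNN note generator
# conditioned on four gameplay-control parameters, streaming generated notes
# to the game over WebSocket. The model here is untrained and shown only to
# illustrate the interface between generator and game.
import asyncio
import torch
import torch.nn as nn
import websockets  # third-party package: `pip install websockets`

VOCAB_SIZE = 128    # assumed MIDI-pitch-style note encoding
NUM_CONTROLS = 4    # e.g. tempo, intensity, note density, "pace" (hypothetical names)

class ConditionedRNN(nn.Module):
    """GRU that predicts the next note from the previous note and a control vector."""
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 64)
        self.rnn = nn.GRU(64 + NUM_CONTROLS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB_SIZE)

    def forward(self, notes, controls, state=None):
        # notes: (B, T) int64, controls: (B, T, NUM_CONTROLS) float32
        x = torch.cat([self.embed(notes), controls], dim=-1)
        out, state = self.rnn(x, state)
        return self.head(out), state

def sample_next(model, prev_note, controls, state, temperature=1.0):
    """Sample one note; `controls` reflects the current gameplay state."""
    logits, state = model(prev_note.view(1, 1), controls.view(1, 1, -1), state)
    probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
    return torch.multinomial(probs, 1), state

async def stream_to_game(model, uri="ws://localhost:8765"):
    # Push live-generated notes to the game over WebSocket (endpoint assumed).
    note = torch.tensor([60])  # start on middle C
    state = None
    async with websockets.connect(uri) as ws:
        while True:
            controls = torch.rand(NUM_CONTROLS)  # would come from gameplay telemetry
            note, state = sample_next(model, note, controls, state)
            await ws.send(str(int(note)))
            await asyncio.sleep(0.25)  # naive fixed note spacing

if __name__ == "__main__":
    asyncio.run(stream_to_game(ConditionedRNN()))
```

In such a setup, the game would periodically send updated control values (e.g., derived from gameplay pace) back over the same socket, and the generator would fold them into the conditioning vector of the next sampling step.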
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Kopel, M., Antczak, D., Walczyński, M. (2023). Generating Music for Video Games with Real-Time Adaptation to Gameplay Pace. In: Nguyen, N.T., et al. (eds.) Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science, vol. 13995. Springer, Singapore. https://doi.org/10.1007/978-981-99-5834-4_21
DOI: https://doi.org/10.1007/978-981-99-5834-4_21
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-5833-7
Online ISBN: 978-981-99-5834-4