Dynamic Music Generation, Audio Analysis-Synthesis Methods

Living reference work entry in: Encyclopedia of Computer Graphics and Games

Synonyms

Adaptive music systems; Audio collage; Audio mosaicing; Concatenative sound synthesis; Interactive music systems

Definitions

Dynamic music generation systems create ever-different, continually changing musical structures based on formalized computational methods. This entry covers the subset of these methods that adopt musical audio as the basis for formalizing musical structure, which then guides higher-level transformations synthesized as new musical audio streams.
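
To make the analysis-synthesis idea concrete, the sketch below illustrates a minimal concatenative (audio mosaicing) pipeline: a corpus recording is segmented at detected onsets, each segment is described by a time-averaged MFCC vector, and a target recording is then resynthesized by concatenating the closest corpus segments in descriptor space. This is an illustrative sketch only, not the method of any system cited in the references; the file names corpus.wav, target.wav, and mosaic.wav, the MFCC descriptor, and the nearest-neighbor unit selection are assumptions, and the librosa and soundfile Python libraries are assumed to be installed.

```python
# Minimal audio analysis-synthesis sketch (concatenative resynthesis).
# Assumptions: librosa and soundfile are installed; corpus.wav and target.wav
# are placeholder input files.
import numpy as np
import librosa
import soundfile as sf

SR = 22050  # analysis/synthesis sample rate


def segment_and_describe(path):
    """Split a recording at detected onsets and describe each segment
    by its time-averaged MFCC vector."""
    y, _ = librosa.load(path, sr=SR)
    onsets = librosa.onset.onset_detect(y=y, sr=SR, units="samples", backtrack=True)
    bounds = np.concatenate(([0], onsets, [len(y)]))
    segments, descriptors = [], []
    for start, end in zip(bounds[:-1], bounds[1:]):
        if end - start < 2048:  # skip segments too short for a stable descriptor
            continue
        seg = y[start:end]
        mfcc = librosa.feature.mfcc(y=seg, sr=SR, n_mfcc=13)
        segments.append(seg)
        descriptors.append(mfcc.mean(axis=1))
    return segments, np.array(descriptors)


# Analysis: build a descriptor-indexed corpus and a target unit sequence.
corpus_segs, corpus_desc = segment_and_describe("corpus.wav")
target_segs, target_desc = segment_and_describe("target.wav")

# Synthesis: replace every target unit with the nearest corpus unit in
# descriptor space and concatenate the result into a new audio stream.
output = []
for desc in target_desc:
    nearest = int(np.argmin(np.linalg.norm(corpus_desc - desc, axis=1)))
    output.append(corpus_segs[nearest])

sf.write("mosaic.wav", np.concatenate(output), SR)
```

Systems such as those cited in the references elaborate on this skeleton with unit-selection constraints, concatenation costs, and real-time control, but the segment/describe/select/concatenate loop above is the common underlying structure.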

Introduction

Technologies that adhere to nonlinear, as opposed to fixed, storytelling are becoming pervasive in the digital media landscape (Lister et al. 2003). In this context, methods for dynamically generating music have been prominently and increasingly adopted in games, virtual and augmented reality, interactive installations, and 360° video. Their adoption is motivated by a wide range of factors, from computational constraints (e.g., limited memory) to enhanced interaction with external actuators and artistic...


References

  • Assayag, G., Bloch, G., Chemillier, M., Cont, A., Dubnov, S.: Omax brothers: a dynamic topology of agents for improvisation learning. In: Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia, pp. 125–132. ACM (2006)

  • Bernardes, G., Guedes, C., Pennycook, B.: EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data, pp. 110–129. Springer, Berlin (2013)

  • Bernardes, G., Aly, L., Davies, M.E.P.: Seed: resynthesizing environmental sounds from examples. In: Proceedings of the Sound and Music Computing Conference (2016)

  • Davies, M.E.P., Hamel, P., Yoshii, K., Goto, M.: Automashupper: automatic creation of multi-song music mashups. IEEE/ACM Trans. Audio Speech Lang. Process. 22(12), 1726–1737 (2014). https://doi.org/10.1109/TASLP.2014.2347135. ISSN 2329-9290

  • Einbond, A., Schwarz, D., Borghesi, R., Schnell, N.: Introducing catoracle: corpus-based concatenative improvisation with the audio oracle algorithm. In: Proceedings of the International Computer Music Conference, pp. 140–146 (2016). http://openaccess.city.ac.uk/15424/

  • Fröjd, M., Horner, A.: Fast sound texture synthesis using overlap-add. In: Proceedings of the International Computer Music Conference (2007)

  • Jehan, T.: Creating Music by Listening. PhD thesis, Massachusetts Institute of Technology (2005)

  • Lamere, P.: The infinite jukebox. www.infinitejuke.com (2012). Accessed 24 May 2016

  • Lister, M., Dovey, J., Giddings, S., Grant, I., Kelly, K.: New Media: A Critical Introduction. Routledge, London (2003)

  • Nierhaus, G.: Algorithmic Composition: Paradigms of Automated Music Generation. Springer Science & Business Media, Wien (2009)

  • Norowi, N.M., Miranda, E.R., Hussin, M.: Parametric factors affecting concatenative sound synthesis. Adv. Sci. Lett. 23(6), 5496–5500 (2017)

  • Pachet, F., Roy, P., Moreira, J., d’Inverno, M.: Reflexive loopers for solo musical improvisation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2205–2208. ACM (2013)

  • Schwarz, D., Schnell, N.: Descriptor-based sound texture sampling. In: Proceedings of the Sound and Music Computing, pp. 510–515 (2010)

  • Schwarz, D., Beller, G., Verbrugghe, B., Britton, S.: Real-time corpus-based concatenative synthesis with CataRT. In: Proceedings of the 9th International Conference on Digital Audio Effects (DAFx-06), pp. 279–282. Montreal (2006)

  • Surges, G., Dubnov, S.: Feature selection and composition using pyoracle. In: Ninth Artificial Intelligence and Interactive Digital Entertainment Conference, pp. 19 (2013)

  • Verfaille, V., Arfib, D.: A-dafx: Adaptive digital audio effects. In: Proceedings of the Workshop on Digital Audio Effects, pp. 10–14 (2001)

Author information

Correspondence to Gilberto Bernardes.

Copyright information

© 2019 Springer Nature Switzerland AG

About this entry

Cite this entry

Bernardes, G., Cocharro, D. (2019). Dynamic Music Generation, Audio Analysis-Synthesis Methods. In: Lee, N. (eds) Encyclopedia of Computer Graphics and Games. Springer, Cham. https://doi.org/10.1007/978-3-319-08234-9_211-1

  • DOI: https://doi.org/10.1007/978-3-319-08234-9_211-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08234-9

  • Online ISBN: 978-3-319-08234-9

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
