DOI: 10.1145/3658852.3659068
Research article · Open access

Multimodal Looper: A Live-Looping System for Gestural and Audio-visual Improvisation

Published: 27 June 2024

Abstract

We present the Multimodal Looper, an embodied digital interface that connects body movements to audiovisual forms for musical improvisation. It extends the performative practice of the musical looper, which lets musicians record and play back multiple layers of sound in real time. With the Multimodal Looper, we explore music cognition from an embodied perspective, drawing on cross-modal correspondences to achieve an intuitive method suitable for collaboration. Our goal is to create a live-looping system with multimodal objects that are gesturally activated and visually represented. Our first prototype focused on the essential modules of a live-looping system: for gesture recognition, we used a depth camera and a decision tree; for visuals that correspond to distinct categories of sound (e.g., sustained, iterative, and impulsive), we employed procedural generation techniques such as animated noise, feedback loops, instancing, and various mathematical operations. The system's effectiveness in establishing cross-modal correspondences for a multisensory experience was evaluated through user testing.
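The abstract names the two technical components concretely: a decision tree over depth-camera input for gesture recognition, and procedural generation (animated noise, feedback loops, instancing) for the visuals. The paper does not publish code here, so the following is a minimal, hypothetical sketch of the classification step in Python with scikit-learn; the joint count, feature layout, gesture labels, and training data are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assumed feature layout: each depth-camera frame yields 3D positions for a
# fixed set of tracked joints, flattened into one feature vector.
N_JOINTS = 15
GESTURES = ["sustained", "iterative", "impulsive"]  # mirrors the sound categories

rng = np.random.default_rng(0)
# Placeholder data; a real system would record labeled gesture examples.
X_train = rng.normal(size=(300, N_JOINTS * 3))
y_train = rng.integers(0, len(GESTURES), size=300)

clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)

def classify_frame(joints_xyz: np.ndarray) -> str:
    """Map one frame of flattened joint positions to a gesture category."""
    return GESTURES[clf.predict(joints_xyz.reshape(1, -1))[0]]

print(classify_frame(rng.normal(size=N_JOINTS * 3)))
```

On the visual side, a "feedback loop" over "animated noise" conventionally means blending each new noise field with the previous output frame, so textures evolve smoothly rather than flickering. A toy version of that idiom, again purely illustrative and not the paper's renderer:

```python
import numpy as np

# Feedback loop over animated noise: each frame keeps most of the previous
# frame (DECAY) and mixes in a little fresh noise, producing a slowly
# evolving texture instead of uncorrelated per-frame flicker.
rng = np.random.default_rng(1)
H, W = 64, 64
DECAY = 0.9
frame = np.zeros((H, W))

for t in range(120):
    noise = rng.random((H, W))
    frame = DECAY * frame + (1 - DECAY) * noise

print(frame.min(), frame.max())
```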


Published In

MOCO '24: Proceedings of the 9th International Conference on Movement and Computing
May 2024, 245 pages
ISBN: 9798400709944
DOI: 10.1145/3658852
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Embodied Interaction
  2. Improvisation
  3. Multimodal Systems
  4. Visual Music


Acceptance Rates

MOCO '24 paper acceptance rate: 35 of 75 submissions (47%); overall acceptance rate: 85 of 185 submissions (46%).

