
CrAIzy MIDI: AI-Powered Wearable Musical Instrument for Novice Players

Published: 13 October 2024 · DOI: 10.1145/3672539.3686735

Abstract

Playing music is a deeply fulfilling and widely cherished activity, yet its steep learning curve often discourages novices. Traditional music-making demands significant time and effort to master music theory, instrumental mechanics, motor skills, and notation reading. Lowering these barriers calls for new technology-driven approaches. This proposal introduces CrAIzy MIDI, an AI-powered wearable musical instrument designed to simplify and enrich the music-playing experience for beginners. CrAIzy MIDI integrates three key technologies: a wearable user interface, AI-generated music, and multi-modal interaction. The wearable interface lets users play multiple instruments with intuitive finger and palm movements, reducing the complexity of traditional instruments. AI-generated music segments let users enter a few pitches and have the AI complete the piece, helping beginners overcome composition challenges. The multi-modal experience deepens engagement by letting users adjust music effects through visual stimuli such as changes in light color and intensity. Together, these features make music creation more accessible and enjoyable, encouraging novice musicians to keep practicing and exploring.
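The abstract describes the pipeline only at a high level. As a concrete illustration of the first component, the sketch below translates per-finger touch states from a wearable into MIDI note-on/note-off messages using the Python `mido` library. It is a minimal sketch under assumed hardware: `read_finger_states` and the `FINGER_TO_PITCH` mapping are hypothetical stand-ins for the authors' sensing layer, not their actual implementation.

```python
# Minimal sketch of the wearable-to-MIDI idea from the abstract: finger
# contacts are read as on/off states and turned into MIDI note messages.
# The sensor function and pitch mapping are hypothetical, not the authors'.
import time
import mido

# Hypothetical mapping: one pitch per finger (C major pentatonic subset).
FINGER_TO_PITCH = {0: 60, 1: 62, 2: 64, 3: 67, 4: 69}

def read_finger_states():
    """Placeholder for the wearable's touch sensors.

    A real implementation would poll a capacitive-sensing
    microcontroller; here every finger simply reads as released.
    """
    return {finger: False for finger in FINGER_TO_PITCH}

def run():
    port = mido.open_output()               # default MIDI output port
    previous = {f: False for f in FINGER_TO_PITCH}
    while True:
        current = read_finger_states()
        for finger, pressed in current.items():
            if pressed and not previous[finger]:     # finger just touched
                port.send(mido.Message('note_on',
                                       note=FINGER_TO_PITCH[finger],
                                       velocity=80))
            elif not pressed and previous[finger]:   # finger just released
                port.send(mido.Message('note_off',
                                       note=FINGER_TO_PITCH[finger]))
        previous = current
        time.sleep(0.01)                     # ~100 Hz polling loop

if __name__ == '__main__':
    run()
```

Sending messages only on state changes (edge triggering) mirrors how a MIDI keyboard behaves and avoids flooding the synthesizer with duplicate note events.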



Published In

UIST Adjunct '24: Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology
October 2024
394 pages
ISBN:9798400707186
DOI:10.1145/3672539
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI music
  2. accessibility
  3. wearable user interface

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Funding Sources

  • The Hong Kong University of Science and Technology (Guangzhou)

Conference

UIST '24

Acceptance Rates

Overall Acceptance Rate: 355 of 1,733 submissions (20%)


