DOI: 10.1145/3595916.3626350

Demonstration

TelEmoScatter: Enabling Remote Interaction and Emotional Connections in Virtual and Physical Music Performance

Published: 01 January 2024

Abstract

To enrich the emotional experience of virtual reality (VR) online audiences at music performances, we developed TelEmoScatter, a system that facilitates remote interaction between music performers and onsite audiences. The system also fosters emotional connections for online audiences through sound-to-visual conversion that is modulated by the state of the onsite audience, inferred with computer vision techniques. In this work, we generate a 3D space using real-time sound-visualization techniques that convert MIDI signals from musical instruments into dynamic animations. Additionally, we employ video analysis to predict the emotions of the onsite audience, allowing seamless integration of emotional visual cues into the virtual scene. With our system, users can immerse themselves in the emotional expressions of performers through music and experience the unique atmosphere of a live performance venue simply by wearing a VR headset.
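The abstract describes converting MIDI signals from the instruments into parameters of a dynamic 3D animation. A minimal sketch of one plausible mapping is shown below; the function name, the pitch-to-hue/velocity-to-size scheme, and the returned parameters are illustrative assumptions, not the authors' implementation.

```python
def note_to_particle(note: int, velocity: int) -> dict:
    """Map a MIDI note-on event (note 0-127, velocity 0-127)
    to hypothetical particle-animation parameters."""
    hue = (note % 12) / 12.0                # pitch class -> position on color wheel
    size = 0.2 + 0.8 * (velocity / 127.0)   # louder notes -> larger particles
    height = note / 127.0                   # higher pitch -> higher in the 3D scene
    return {"hue": hue, "size": size, "height": height}

# Example: middle C (note 60) played forte (velocity 100)
print(note_to_particle(60, 100))
```

In a real pipeline these parameters would drive the rendering engine each frame; the emotion predicted from the onsite video feed could then bias the same mapping (e.g., shifting the palette or motion speed).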



Published In

MMAsia '23: Proceedings of the 5th ACM International Conference on Multimedia in Asia
December 2023
745 pages
ISBN: 9798400702051
DOI: 10.1145/3595916
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery
New York, NY, United States

    Author Tags

    1. Emotion Affection
    2. Music Visualization
    3. Sound-Visualization

    Qualifiers

    • Demonstration
    • Research
    • Refereed limited

    Conference

MMAsia '23: ACM Multimedia Asia
December 6–8, 2023
Tainan, Taiwan

    Acceptance Rates

    Overall Acceptance Rate 59 of 204 submissions, 29%
