DOI: 10.1145/3638884.3638978
research-article

Pipa Performance Generation Based on Pre-trained Temporally Guided Network

Published: 23 April 2024

ABSTRACT

In this paper, we propose a method that takes the audio of a solo pipa performance as input and generates the pipa player's 3D skeleton motion, using pre-trained models of other instruments to compensate for the limited availability of pipa training data. Our key assumption is that performers of all instruments exhibit similar motion trends in their larger limbs while playing. We therefore pre-train on videos of other instruments being played to capture these large-limb motion trends, and then fine-tune the model on pipa solo videos, addressing the scarcity of pipa-specific data. Using a combination of an LSTM network, a U-Net architecture, and a self-attention mechanism, we validate the effectiveness of cross-instrument pre-training on pipa solo videos collected from the Internet.
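The pre-train-then-fine-tune strategy described above can be illustrated with a deliberately simplified sketch. This is not the authors' code: a linear audio-feature-to-joint-coordinate map stands in for their LSTM/U-Net model, and the synthetic "shared mapping plus a small pipa-specific offset" encodes the assumption that large-limb motion trends are common across instruments. The point is only to show why initializing from a model pre-trained on plentiful other-instrument data helps when pipa data is scarce.

```python
import numpy as np

# Illustrative transfer-learning sketch (hypothetical, not the paper's model):
# stage 1 pre-trains a motion predictor on plentiful "other instrument" data,
# stage 2 fine-tunes it on a small "pipa" dataset.

rng = np.random.default_rng(0)

def train(W, X, Y, lr=0.1, steps=200):
    """Plain gradient descent on a linear audio-feature -> joint-coordinate map."""
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / len(X)
        W = W - lr * grad
    return W

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

D_audio, D_joints = 8, 6
# Shared ground-truth mapping: stands in for large-limb motion trends common
# to all instruments; the pipa adds a small instrument-specific perturbation.
W_true = rng.normal(size=(D_audio, D_joints))

X_other = rng.normal(size=(500, D_audio))   # plentiful: other instruments
Y_other = X_other @ W_true

X_pipa = rng.normal(size=(20, D_audio))     # scarce: pipa solo videos
Y_pipa = X_pipa @ (W_true + 0.1 * rng.normal(size=W_true.shape))

W0 = np.zeros((D_audio, D_joints))
W_pre = train(W0, X_other, Y_other)                  # stage 1: pre-train
W_ft = train(W_pre, X_pipa, Y_pipa, steps=50)        # stage 2: fine-tune
W_scratch = train(W0, X_pipa, Y_pipa, steps=50)      # baseline: pipa data only
```

With the same small budget of pipa data and fine-tuning steps, the model initialized from the pre-trained weights reaches a lower pipa reconstruction error than the model trained from scratch, which is the effect the paper exploits.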


Published in

ICCIP '23: Proceedings of the 2023 9th International Conference on Communication and Information Processing
December 2023, 648 pages
ISBN: 9798400708909
DOI: 10.1145/3638884

Copyright © 2023 ACM

Publisher

Association for Computing Machinery, New York, NY, United States



      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

Overall Acceptance Rate: 61 of 301 submissions, 20%
