
RD-FGFS: A Rule-Data Hybrid Framework for Fine-Grained Footstep Sound Synthesis from Visual Guidance

Published: 27 October 2023
DOI: 10.1145/3581783.3613765

Abstract

Existing methods struggle to synthesize fine-grained footstep sounds from video frames alone, owing to the complicated nonlinear mappings between motion states, spatial locations, and the corresponding footstep sounds. To address this issue, we propose a Rule-Data guided Fine-Grained Footstep Sound (RD-FGFS) synthesis method. To the best of our knowledge, our work takes the first step in integrating data-driven and rule-based modeling approaches for visually aligned footstep sound synthesis. First, we design a learning-based footstep sound generation network (FSGN) driven by pose and flow features, which generates an initial target sound capturing timing cues. Second, we design a rule-based fine-grained footstep sound adjustment (FGFSA) method guided by visual cues, namely ground material, movement type, and displacement distance. The proposed FGFSA effectively constructs a mapping between different visual cues and footstep sounds, enabling fine-grained variation of footstep sounds. Experimental results show that our method improves audio-visual synchronization of footsteps and achieves impressive performance in fine-grained control of footstep sounds.
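The abstract describes a two-stage pipeline: a learned generator (FSGN) first produces a correctly timed initial footstep sound, and a rule-based stage (FGFSA) then adjusts it according to visual cues. As a rough, hypothetical sketch of the second stage only, the Python snippet below maps assumed ground-material, movement-type, and displacement cues to simple gain and spectral tweaks of a single footstep waveform; the rule tables, parameter values, and signal operations are illustrative assumptions, not the adjustment rules used in the paper.

import numpy as np

# Assumed per-material gain and "brightness" factors (illustrative values only).
MATERIAL_RULES = {
    "wood":     {"gain": 1.0, "brightness": 0.8},
    "gravel":   {"gain": 1.2, "brightness": 1.3},
    "concrete": {"gain": 0.9, "brightness": 1.1},
}

# Assumed per-movement-type intensity scaling (e.g. walking vs. running).
MOVEMENT_RULES = {"walk": 0.8, "run": 1.4}


def adjust_footstep(sound, material, movement, displacement_m):
    """Apply rule-based fine-grained adjustments to one footstep clip.

    `sound` is a mono waveform for a single footstep, e.g. produced by a
    learned generator such as the FSGN stage. The rules below only sketch
    the idea of conditioning on visual cues; they are not from the paper.
    """
    rules = MATERIAL_RULES.get(material, {"gain": 1.0, "brightness": 1.0})
    out = sound * rules["gain"] * MOVEMENT_RULES.get(movement, 1.0)

    # Larger step displacement -> slightly louder impact (assumed linear rule).
    out = out * (1.0 + 0.1 * min(displacement_m, 2.0))

    # Crude brightness control: mix in a first-difference (high-pass-like) term.
    hp = np.concatenate(([0.0], np.diff(out)))
    out = out + (rules["brightness"] - 1.0) * hp

    # Normalize only if the result would otherwise clip.
    peak = float(np.max(np.abs(out)))
    return out / max(peak, 1.0)


if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0.0, 0.2, int(0.2 * sr), endpoint=False)
    # Toy stand-in for an initial generated footstep: a decaying noise burst.
    step = np.random.randn(t.size) * np.exp(-t / 0.03)
    adjusted = adjust_footstep(step, material="gravel",
                               movement="run", displacement_m=0.9)
    print(adjusted.shape, float(np.max(np.abs(adjusted))))

In the actual method these cues are extracted from the video; here they are passed in by hand purely to illustrate the rule-data division of labor between the learned and rule-based stages.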

Supplemental Material

MP4 File: Presentation video


Cited By

  • (2024) Dance-to-Music Generation with Encoder-based Textual Inversion. SIGGRAPH Asia 2024 Conference Papers. https://doi.org/10.1145/3680528.3687562, pp. 1-11. Online publication date: 3-Dec-2024.


      Published In

      MM '23: Proceedings of the 31st ACM International Conference on Multimedia
      October 2023
      9913 pages
ISBN: 9798400701085
DOI: 10.1145/3581783

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. footstep sound
      2. procedural audio
      3. rule-data hybrid framework
      4. sound synthesis
      5. visual guidance

      Qualifiers

      • Research-article

      Funding Sources

      • Natural Science Foundation of China

      Conference

MM '23: The 31st ACM International Conference on Multimedia
October 29 - November 3, 2023
Ottawa, ON, Canada

      Acceptance Rates

Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%
