DOI: 10.1145/3422839.3423063
research-article

Neural Style Transfer Based Voice Mimicking for Personalized Audio Stories

Published: 12 October 2020

ABSTRACT

This paper demonstrates CNN-based neural style transfer on audio to make storytelling a personalized experience: users record a few sentences, which are then used to mimic their voice. The user's recordings are converted to spectrograms, whose style is transferred to the spectrogram of a base voice narrating the story, analogous to style transfer on images. This approach stands out because it needs only a small dataset and therefore also takes less time to train. The project is intended for children who prefer digital interaction and are increasingly leaving the storytelling culture behind, and for working parents who cannot spend enough time with their children. By using a parent's initial recording to narrate a given story, the system is designed to bridge storytelling and screen time: it engages children through the implicit ethical themes of the stories while connecting them to their loved ones, ensuring an innocuous and meaningful learning experience.
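The pipeline the abstract describes has two recoverable technical steps: converting audio to a magnitude spectrogram, and matching style statistics between spectrograms as in image style transfer, where style is typically captured by Gram matrices of feature maps. The sketch below is illustrative only, assuming plain NumPy, synthetic tones in place of real recordings, and a hypothetical `gram` helper; the abstract does not specify the authors' actual network, features, or hyperparameters.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed STFT (illustrative sketch)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Real FFT of each frame -> shape (n_frames, n_fft // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

def gram(features):
    """Gram matrix of a (channels, time) feature map -- the style statistic
    matched during optimization in image-style transfer (hypothetical helper)."""
    flat = features.reshape(features.shape[0], -1)
    return flat @ flat.T / flat.size

# Synthetic stand-ins for a user recording and a base narration voice:
sr = 16000
t = np.arange(sr) / sr
user = np.sin(2 * np.pi * 440 * t)   # plays the role of the user's voice
base = np.sin(2 * np.pi * 220 * t)   # plays the role of the base narrator

S_user, S_base = spectrogram(user), spectrogram(base)
# Treating frequency bins as channels, a style loss would penalize the
# Gram-matrix mismatch between the two spectrograms:
style_loss = np.mean((gram(S_user.T) - gram(S_base.T)) ** 2)
print(S_user.shape, style_loss > 0)
```

In a full system, the style loss above would be computed on learned (or random-CNN) feature maps of the spectrograms rather than raw bins, and minimized by gradient descent on the base spectrogram before inverting it back to audio.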


Published in

AI4TV '20: Proceedings of the 2nd International Workshop on AI for Smart TV Content Production, Access and Delivery
October 2020, 50 pages
ISBN: 9781450381468
DOI: 10.1145/3422839

Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
