
Speech Segregation Using an Event-synchronous Auditory Image and STRAIGHT

Chapter in: Speech Separation by Humans and Machines

Conclusions

We have presented methods for segregating concurrent speech sounds using an auditory model and a vocoder. Specifically, the system combines the Auditory Image Model (AIM), a robust F0 estimator, and a synthesis module based on either STRAIGHT or an auditory synthesis filterbank. The event-synchronous procedure enhances the intelligibility of the target speaker in the presence of concurrent background speech, and the resulting segregation performance exceeds that of conventional comb-filter methods whenever there are errors in fundamental-frequency estimation, as there inevitably are in real concurrent speech. Test results suggest that this auditory segregation method has potential for speech enhancement in applications such as hearing aids.
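
To make the event-synchronous idea concrete, the sketch below gates a two-voice mixture with short raised-cosine windows centred on the target speaker's glottal events. It is a minimal Python illustration under stated assumptions, not the AIM/STRAIGHT implementation described above: the phase-accumulation event detector, the 4-ms window, the pulse-train test signals, and all function names are hypothetical stand-ins, and the target F0 track is assumed to be known rather than estimated.

import numpy as np

def event_times_from_f0(f0, hop_s):
    """Convert a frame-rate F0 track (Hz, 0 = unvoiced) into glottal-event
    times (s) by accumulating instantaneous phase and emitting one event per
    completed cycle, interpolating the crossing time within the frame."""
    phase, t, events = 0.0, 0.0, []
    for f in f0:
        if f > 0:
            new_phase = phase + 2 * np.pi * f * hop_s
            m = 2 * np.pi                          # next full-cycle crossing
            while m <= new_phase:
                events.append(t + (m - phase) / (2 * np.pi * f))
                m += 2 * np.pi
            phase = new_phase % (2 * np.pi)
        else:
            phase = 0.0                            # reset in unvoiced frames
        t += hop_s
    return np.asarray(events)

def event_synchronous_gate(mixture, fs, events, win_s=0.004):
    """Emphasise the target voice by summing raised-cosine windows centred on
    its glottal events; the intervals between events, where the competing
    voice dominates, are attenuated."""
    out = np.zeros_like(mixture)
    half = int(win_s * fs / 2)
    win = np.hanning(2 * half)
    for te in events:
        c = int(round(te * fs))
        lo, hi = max(c - half, 0), min(c + half, len(mixture))
        out[lo:hi] += mixture[lo:hi] * win[lo - c + half:hi - c + half]
    return out

# Toy demonstration: two pulse trains stand in for concurrent voices.
fs, n = 16000, 16000
def pulse_train(f0):
    x = np.zeros(n)
    x[np.arange(0, n, fs // f0)] = 1.0
    return x

mix = pulse_train(125) + pulse_train(190)          # target voice at 125 Hz
f0_track = np.full(100, 125.0)                     # known target F0, 10-ms hop
events = event_times_from_f0(f0_track, hop_s=0.010)
enhanced = event_synchronous_gate(mix, fs, events)

In the full system, the events would come from the auditory-image analysis and the F0 estimator rather than a known track, and resynthesis would pass through STRAIGHT or the auditory synthesis filterbank instead of simple time-domain gating; the sketch is intended only to show why synchronising processing to glottal events can tolerate F0 errors that defeat a fixed comb filter.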

Copyright information

© 2005 Springer Science + Business Media, Inc.

Cite this chapter

Irino, T., Patterson, R.D., Kawahara, H. (2005). Speech Segregation Using an Event-synchronous Auditory Image and STRAIGHT. In: Divenyi, P. (ed.) Speech Separation by Humans and Machines. Springer, Boston, MA. https://doi.org/10.1007/0-387-22794-6_10

  • DOI: https://doi.org/10.1007/0-387-22794-6_10

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4020-8001-2

  • Online ISBN: 978-0-387-22794-8

  • eBook Packages: Engineering (R0)
