Abstract:
Many state-of-the-art multichannel speech enhancement methods rely on second-order statistics of the desired speech signal, the noise signal, or both. Estimating these statistics is difficult in practice, so the practical performance of such methods typically falls well short of their theoretical potential. We propose two multichannel enhancement techniques that instead rely on a model for voiced speech. That is, the proposed methods are driven by the signals' fundamental frequencies, which can be estimated accurately even in noisy scenarios. The first method is designed independently of the microphone array geometry and source position, whereas the second exploits both. This lets us investigate when such spatial information should be exploited in the presence of localization errors and violations of the spatial assumptions. Numerical results show that the proposed methods outperform competing methods in terms of both output SNRs and PESQ scores.
Published in: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 05-09 March 2017
Date Added to IEEE Xplore: 19 June 2017
Electronic ISSN: 2379-190X