ISCA Archive Interspeech 2022

Prototypical speaker-interference loss for target voice separation using non-parallel audio samples

Seongkyu Mun, Dhananjaya Gowda, Jihwan Lee, Changwoo Han, Dokyun Lee, Chanwoo Kim

In this paper, we propose a new prototypical loss function for training neural network models for target voice separation. Conventional methods use paired parallel audio samples of the target speaker with and without an interfering speaker or noise, and minimize the spectrographic mean squared error (MSE) between the clean and enhanced target speaker audio. Motivated by the use of contrastive loss in the speaker recognition task, we had earlier proposed a speaker representation loss that uses representative samples from the target speaker in addition to the conventional MSE loss. In this work, we propose a prototypical speaker-interference (PSI) loss that makes use of representative samples from the target speaker, the interfering speaker, as well as the interfering noise, to better utilize any non-parallel data that may be available. The performance of the proposed loss function is evaluated using VoiceFilter, a popular framework for target voice separation. Experimental results show that the proposed PSI loss significantly improves the PESQ scores of the enhanced target speaker audio.
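The abstract describes combining the conventional spectrogram MSE with a prototypical term built from representative (non-parallel) samples of the target speaker and the interference. The paper's exact formulation is not given here, so the sketch below is only illustrative: it assumes cosine similarity between an embedding of the enhanced speech and per-source prototypes, with a softmax-style contrastive term that attracts the target prototype and repels interference prototypes. All function names and the weighting factor `alpha` are hypothetical.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def psi_style_loss(enhanced_spec, clean_spec,
                   enhanced_emb, target_proto, interference_protos,
                   alpha=0.1):
    """Illustrative PSI-style loss (assumed form, not the paper's exact one).

    enhanced_spec / clean_spec: magnitude spectrograms (parallel pair).
    enhanced_emb: speaker embedding of the enhanced audio.
    target_proto: prototype embedding averaged over representative
        target-speaker samples (can come from non-parallel data).
    interference_protos: prototypes for interfering speaker(s) and noise.
    """
    # Conventional spectrographic MSE between enhanced and clean audio.
    mse = float(np.mean((enhanced_spec - clean_spec) ** 2))

    # Prototypical term: pull the enhanced-speech embedding toward the
    # target prototype, push it away from interference prototypes.
    pos = cosine(enhanced_emb, target_proto)
    negs = [cosine(enhanced_emb, p) for p in interference_protos]
    logits = np.array([pos] + negs)
    proto = -float(np.log(np.exp(logits[0]) / np.sum(np.exp(logits))))

    return mse + alpha * proto
```

With a well-separated output, the embedding sits near the target prototype, so the prototypical term is small; an output contaminated by the interferer drifts toward an interference prototype and is penalized even without a parallel clean reference for that term.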


doi: 10.21437/Interspeech.2022-11236

Cite as: Mun, S., Gowda, D., Lee, J., Han, C., Lee, D., Kim, C. (2022) Prototypical speaker-interference loss for target voice separation using non-parallel audio samples. Proc. Interspeech 2022, 276-280, doi: 10.21437/Interspeech.2022-11236

@inproceedings{mun22_interspeech,
  author={Seongkyu Mun and Dhananjaya Gowda and Jihwan Lee and Changwoo Han and Dokyun Lee and Chanwoo Kim},
  title={{Prototypical speaker-interference loss for target voice separation using non-parallel audio samples}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={276--280},
  doi={10.21437/Interspeech.2022-11236}
}