ISCA Archive Interspeech 2022

Self-Supervised Learning with Multi-Target Contrastive Coding for Non-Native Acoustic Modeling of Mispronunciation Verification

Longfei Yang, Jinsong Zhang, Takahiro Shinozaki

Non-native mispronunciation verification is an important component of computer-aided language learning (CALL) systems. However, data sparsity makes it difficult to build an accurate acoustic model directly on non-native data with supervised approaches, since collecting and manually labeling a large amount of non-native speech is impractical. In this paper, we propose a pre-training approach based on self-supervised learning with multi-target contrastive coding that exploits abundant raw speech resources in two native languages for non-native acoustic modeling in mispronunciation verification. The model is designed to learn representations of the discrepancies in phonetic structure within and across the two languages, as well as across speakers, by making predictions that are contrastive with respect to the different targets. In addition, a reconstruction term, which recovers the original speech from the shared components, is incorporated as a regularizer. Experiments on the Japanese part of the BLCU inter-Chinese speech corpus show that the proposed approaches effectively improve the performance of non-native acoustic modeling for phone recognition and mispronunciation verification.
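The abstract describes the objective only at a high level. The following is a minimal, hypothetical PyTorch sketch of a multi-target contrastive loss combined with a reconstruction regularizer, in the spirit of the description above. The InfoNCE formulation, the choice of two targets (phonetic structure and speaker), and all names (info_nce, multi_target_loss, decoder, lambda_rec) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE loss: pull the anchor toward its positive sample and
    # push it away from the negatives.
    # anchor, positive: (batch, dim); negatives: (batch, n_neg, dim).
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(-1, keepdim=True)       # (B, 1)
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives)  # (B, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    # The positive sits at index 0 of the logits for every example.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

def multi_target_loss(shared, phone_pos, phone_neg, spk_pos, spk_neg,
                      decoder, features, lambda_rec=0.1):
    # Combine two contrastive targets with a reconstruction regularizer.
    # Contrast against phonetic-structure targets (within/across languages).
    l_phone = info_nce(shared, phone_pos, phone_neg)
    # Contrast against speaker targets.
    l_spk = info_nce(shared, spk_pos, spk_neg)
    # Regularize by reconstructing the original speech features
    # from the shared components.
    l_rec = F.mse_loss(decoder(shared), features)
    return l_phone + l_spk + lambda_rec * l_rec

In this sketch, each target contributes its own InfoNCE term over a shared representation, and lambda_rec weights the reconstruction term; how the paper actually samples positives/negatives and weights the terms is not specified in the abstract.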


doi: 10.21437/Interspeech.2022-207

Cite as: Yang, L., Zhang, J., Shinozaki, T. (2022) Self-Supervised Learning with Multi-Target Contrastive Coding for Non-Native Acoustic Modeling of Mispronunciation Verification. Proc. Interspeech 2022, 4312-4316, doi: 10.21437/Interspeech.2022-207

@inproceedings{yang22c_interspeech,
  author={Longfei Yang and Jinsong Zhang and Takahiro Shinozaki},
  title={{Self-Supervised Learning with Multi-Target Contrastive Coding for Non-Native Acoustic Modeling of Mispronunciation Verification}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={4312--4316},
  doi={10.21437/Interspeech.2022-207}
}