ISCA Archive Interspeech 2019

Speaker Adversarial Training of DPGMM-Based Feature Extractor for Zero-Resource Languages

Yosuke Higuchi, Naohiro Tawara, Tetsunori Kobayashi, Tetsuji Ogawa

We propose a novel framework for extracting speaker-invariant features for zero-resource languages. A deep neural network (DNN)-based acoustic model is normalized against speakers via adversarial training: a multi-task learning process trains a shared bottleneck feature to be discriminative with respect to phonemes and independent of speakers. However, because phoneme labels are unavailable for zero-resource languages, adversarial multi-task (AMT) learning cannot be applied directly for speaker normalization. In this work, we obtain a posteriorgram from a Dirichlet process Gaussian mixture model (DPGMM) and use the posterior vectors as supervision for phoneme estimation in the AMT training. The AMT network is designed so that the DPGMM posteriorgram itself is embedded in a speaker-invariant feature space. The proposed network is expected to resolve the potential problem that the posteriorgram may be unreliable as a phoneme representation when the DPGMM components entangle phoneme and speaker information. Following the Zero Resource Speech Challenge protocols, we evaluate the extracted features in phoneme discriminability experiments. The results show that the proposed framework extracts phoneme-discriminative features while suppressing speaker variability.
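To make the described architecture concrete, below is a minimal PyTorch-style sketch of an adversarial multi-task network with a gradient reversal layer: a shared encoder produces the bottleneck feature, one head predicts the DPGMM posteriorgram (soft targets), and a speaker classifier sits behind gradient reversal so the encoder learns to discard speaker information. All names (GradReverse, AMTFeatureExtractor), layer sizes, and the loss weight alpha are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; multiplies the gradient by -lambda
    # on the backward pass, so the encoder maximizes the speaker loss.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class AMTFeatureExtractor(nn.Module):
    # Shared bottleneck trained to predict DPGMM posteriors while an
    # adversarial speaker classifier removes speaker information.
    def __init__(self, in_dim, bottleneck_dim, n_dpgmm, n_speakers, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, bottleneck_dim), nn.ReLU(),
        )
        self.dpgmm_head = nn.Linear(bottleneck_dim, n_dpgmm)       # posteriorgram target
        self.speaker_head = nn.Linear(bottleneck_dim, n_speakers)  # adversary

    def forward(self, x):
        z = self.encoder(x)  # speaker-invariant bottleneck feature
        dpgmm_logits = self.dpgmm_head(z)
        spk_logits = self.speaker_head(GradReverse.apply(z, self.lam))
        return z, dpgmm_logits, spk_logits

# One hypothetical training step: soft DPGMM posteriors supervise the
# phoneme-like head (cross entropy against soft targets); the speaker
# loss is minimized by its head but maximized by the encoder via the
# gradient reversal layer.
def train_step(model, opt, frames, dpgmm_post, spk_ids, alpha=1.0):
    _, dpgmm_logits, spk_logits = model(frames)
    loss_dpgmm = -(dpgmm_post * F.log_softmax(dpgmm_logits, dim=-1)).sum(-1).mean()
    loss_spk = F.cross_entropy(spk_logits, spk_ids)
    loss = loss_dpgmm + alpha * loss_spk
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

After training, the bottleneck activations z would serve as the extracted features; how closely this sketch matches the authors' exact layer configuration and loss weighting is not specified in the abstract.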


doi: 10.21437/Interspeech.2019-2052

Cite as: Higuchi, Y., Tawara, N., Kobayashi, T., Ogawa, T. (2019) Speaker Adversarial Training of DPGMM-Based Feature Extractor for Zero-Resource Languages. Proc. Interspeech 2019, 266-270, doi: 10.21437/Interspeech.2019-2052

@inproceedings{higuchi19_interspeech,
  author={Yosuke Higuchi and Naohiro Tawara and Tetsunori Kobayashi and Tetsuji Ogawa},
  title={{Speaker Adversarial Training of DPGMM-Based Feature Extractor for Zero-Resource Languages}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={266--270},
  doi={10.21437/Interspeech.2019-2052}
}