Semi-supervised learning and cross-lingual knowledge transfer are two strategies for boosting the performance of low-resource speech recognition systems. In this paper, we propose a unified knowledge transfer learning method that addresses both tasks. The knowledge transfer is realized by fine-tuning a Deep Neural Network (DNN). We demonstrate its effectiveness on both a monolingual semi-supervised learning task and a cross-lingual knowledge transfer learning task, and then combine the two strategies to obtain further performance improvement.
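The fine-tuning idea behind this kind of transfer can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual DNN-HMM recipe: the hidden layer stands in for layers pretrained on a source (well-resourced) language, the softmax output layer is replaced for the target language, and only that new layer is trained on a small synthetic target set. All shapes, data, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nll(p, y):
    # Mean negative log-likelihood (cross-entropy) of the true classes.
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# "Pretrained" source-language hidden layer (weights assumed given; frozen).
W_hidden = rng.standard_normal((20, 32)) * 0.1   # 20-dim features -> 32 hidden units
b_hidden = np.zeros(32)

# New output layer for the target language's (hypothetical) phone set.
n_targets = 5
W_out = rng.standard_normal((32, n_targets)) * 0.01
b_out = np.zeros(n_targets)

# Tiny synthetic target-language set, standing in for low-resource data.
X = rng.standard_normal((100, 20))
y = rng.integers(0, n_targets, size=100)

def forward(X):
    h = np.maximum(0.0, X @ W_hidden + b_hidden)  # frozen shared layer, ReLU
    return h, softmax(h @ W_out + b_out)

h, p = forward(X)
loss_before = nll(p, y)

# Fine-tune only the output layer: gradient descent on cross-entropy,
# leaving the shared (transferred) hidden layer untouched.
lr = 0.5
for _ in range(200):
    h, p = forward(X)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0
    grad_logits /= len(y)
    W_out -= lr * (h.T @ grad_logits)
    b_out -= lr * grad_logits.sum(axis=0)

h, p = forward(X)
loss_after = nll(p, y)
print(f"cross-entropy before/after fine-tuning: {loss_before:.3f} / {loss_after:.3f}")
```

In practice one may also fine-tune the shared hidden layers with a small learning rate rather than freezing them; the sketch freezes them only to keep the transferred knowledge explicit.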
Cite as: Xu, H., Su, H., Ni, C., Xiao, X., Huang, H., Chng, E.S., Li, H. (2016) Semi-Supervised and Cross-Lingual Knowledge Transfer Learnings for DNN Hybrid Acoustic Models Under Low-Resource Conditions. Proc. Interspeech 2016, 1315-1319, doi: 10.21437/Interspeech.2016-1099
@inproceedings{xu16_interspeech,
  author={Haihua Xu and Hang Su and Chongjia Ni and Xiong Xiao and Hao Huang and Eng Siong Chng and Haizhou Li},
  title={{Semi-Supervised and Cross-Lingual Knowledge Transfer Learnings for DNN Hybrid Acoustic Models Under Low-Resource Conditions}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={1315--1319},
  doi={10.21437/Interspeech.2016-1099}
}