Abstract
Spoken language recognition is the task of automatically determining the identity of the language spoken in a speech clip. Prior approaches can accurately identify the language in an audio clip, but they usually require long training times and large datasets because most of them rely heavily on phonotactic, acoustic-phonetic and prosodic information. Moreover, the extracted features may capture speaker characteristics rather than linguistic ones. This paper presents a novel approach motivated by a linguistic perspective, in particular syllable structure. Human listening experiments provide strong evidence that syllable structure is a significant knowledge source in human spoken language recognition. The approach includes a block that labels common syllable structures (CV, CVC, VC, etc.). A long short-term memory (LSTM) network then transforms the Mel-frequency cepstral coefficients (MFCCs) of an audio clip into its syllable-structure sequence, thereby reducing the influence of speaker characteristics on the extracted features and lowering the dimensionality of the input to the final language predictor. The resulting syllable sequence is passed through a second LSTM network to predict the language. The proposed method yields a generalized and scalable framework for spoken language recognition with acceptable accuracy. Experiments with 10 languages demonstrate the feasibility of the approach, which achieves a comparable accuracy of 70.40% with a computing time of 37 ms per second of speech, outperforming most existing methods based on acoustic-phonetic and phonotactic features in efficiency.
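As a rough sketch of the two-stage pipeline described above (MFCC frames passed to a first LSTM that predicts syllable-structure labels, whose output sequence is fed to a second LSTM that predicts the language), the following Python snippet illustrates the data flow. The use of Keras, the layer sizes, the MFCC dimensionality, and the syllable-label inventory are assumptions made for illustration, not the authors' actual configuration.

```python
# Minimal sketch of the two-stage LSTM pipeline (hypothetical hyperparameters;
# not the authors' exact architecture).
from tensorflow.keras import layers, models

NUM_MFCC = 13              # MFCC coefficients per frame (assumed)
NUM_SYLLABLE_LABELS = 8    # e.g. CV, CVC, VC, V, ... plus a "none" label (assumed)
NUM_LANGUAGES = 10

# Stage 1: frame-level MFCCs -> per-frame syllable-structure label posteriors.
syllable_tagger = models.Sequential([
    layers.Input(shape=(None, NUM_MFCC)),
    layers.LSTM(128, return_sequences=True),
    layers.TimeDistributed(layers.Dense(NUM_SYLLABLE_LABELS, activation="softmax")),
])

# Stage 2: the sequence of syllable-structure labels -> language identity.
language_classifier = models.Sequential([
    layers.Input(shape=(None, NUM_SYLLABLE_LABELS)),
    layers.LSTM(64),
    layers.Dense(NUM_LANGUAGES, activation="softmax"),
])

def predict_language(mfcc_frames):
    """mfcc_frames: array of shape (1, num_frames, NUM_MFCC)."""
    syllable_posteriors = syllable_tagger(mfcc_frames)   # (1, num_frames, labels)
    return language_classifier(syllable_posteriors)      # (1, NUM_LANGUAGES)
```

In practice the first network would be trained on frame-level syllable-structure targets and the second on the resulting label sequences; the details above are illustrative only.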
Acknowledgments
The authors wish to thank Tatoeba and all of its affiliated speakers for supplying the audio speech samples. We also thank the members of the NTU MirLab and Ting-Yuan Cheng for their support.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Lee, R.H.A., Jang, J.S.R. (2018). A Syllable Structure Approach to Spoken Language Recognition. In: Dutoit, T., Martín-Vide, C., Pironkov, G. (eds) Statistical Language and Speech Processing. SLSP 2018. Lecture Notes in Computer Science, vol 11171. Springer, Cham. https://doi.org/10.1007/978-3-030-00810-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-00809-3
Online ISBN: 978-3-030-00810-9