Feature Extraction Methods in Language Identification: A Survey

Published in Wireless Personal Communications (2019)

Abstract

Language Identification (LI) is a rapidly emerging field within speech processing that aims to accurately identify the language of an utterance from a database based on features of the speech signal. LI technologies have a wide range of applications across different spheres, driven by advances in artificial intelligence and machine learning. Feature extraction is one of the fundamental and most significant processes performed in LI. This review presents the main research paradigms in feature extraction methods, providing researchers with deep insight into feature extraction techniques for future studies in LI. Broadly, it summarizes and compares various feature extraction approaches with and without noise compensation, as the current trend is towards a robust, universal LI framework. The paper categorizes feature extraction approaches on the basis of the features used, the human speech production system and peripheral auditory system, spectral or cepstral analysis, and, lastly, the underlying transform. The different noise compensation-based feature extraction techniques are also covered. The review further shows that Mel-Frequency Cepstral Coefficients (MFCCs) are the most popular approach, and that MFCCs fused with other feature vectors and cleansing approaches give improved performance compared with pure MFCC-based feature extraction. The study also describes the different categories at the front end of the LI system from a research point of view.
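As a concrete illustration of the MFCC front end the survey identifies as the most popular, the sketch below computes MFCCs from scratch with NumPy/SciPy. The parameter choices (25 ms frames with a 10 ms hop, 26 mel filters, 13 coefficients, 512-point FFT) are common textbook defaults assumed here for illustration, not values prescribed by the survey.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, frame_len=0.025, frame_step=0.010,
         n_filters=26, n_ceps=13, n_fft=512):
    """MFCC pipeline: pre-emphasis -> framing -> window -> power
    spectrum -> mel filterbank -> log -> DCT."""
    # Pre-emphasis boosts high frequencies attenuated in speech production.
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

    # Split into overlapping frames (25 ms window, 10 ms hop) and window them.
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + (len(emphasized) - flen) // fstep
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(flen)

    # Periodogram estimate of the power spectrum per frame.
    pow_spec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft

    # Triangular filterbank spaced evenly on the mel scale (0 .. sr/2),
    # mimicking the frequency resolution of the peripheral auditory system.
    mel_pts = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_filters + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log-compress filterbank energies, then decorrelate with a DCT;
    # keep the first n_ceps coefficients as the feature vector.
    feats = np.log(np.maximum(pow_spec @ fbank.T, 1e-10))
    return dct(feats, type=2, axis=1, norm="ortho")[:, :n_ceps]
```

For one second of 16 kHz audio this yields a (98, 13) feature matrix, one 13-dimensional cepstral vector per 10 ms frame; in a noise-compensated variant, an enhancement step (e.g. spectral subtraction) would precede this pipeline.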



Author information

Correspondence to Deepti Deshwal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Deshwal, D., Sangwan, P. & Kumar, D. Feature Extraction Methods in Language Identification: A Survey. Wireless Pers Commun 107, 2071–2103 (2019). https://doi.org/10.1007/s11277-019-06373-3

