Recognition of emotion from speech using evolutionary cepstral coefficients

Published in: Multimedia Tools and Applications

Abstract

An optimal representation of acoustic features remains an ongoing challenge in automatic speech emotion recognition. In this study, we propose cepstral coefficients based on evolutionary filterbanks as emotional features. Because it is difficult to guarantee that a single optimized filterbank provides the best representation for emotion classification, we employed six HMM-based binary classifiers, each using a specific filterbank optimized by a genetic algorithm, to categorize the data into seven emotion classes. Applied in a hierarchical manner, these optimized classifiers outperformed conventional Mel-frequency cepstral coefficients in overall emotion classification accuracy. The proposed method using evolutionary cepstral coefficients achieved a weighted average recall of 87.29% on the Berlin database, whereas the same approach using conventional cepstral features achieved only 79.63%.
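The core idea above — cepstral coefficients computed from a parameterized filterbank whose shape a genetic algorithm can evolve — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the band-edge parameterization, and all numeric settings are assumptions introduced here.

```python
import numpy as np
from scipy.fftpack import dct

def triangular_filterbank(edges_hz, n_fft, sr):
    """Build triangular filters from a vector of band-edge frequencies.

    In the evolutionary approach, a genetic algorithm would evolve this
    edge vector; a conventional MFCC front end would fix it to the mel
    scale. Here it is simply an input parameter.
    """
    bins = np.floor((n_fft + 1) * np.asarray(edges_hz) / sr).astype(int)
    n_filters = len(edges_hz) - 2
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        lo, ctr, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, ctr):           # rising slope of triangle m
            fb[m - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):           # falling slope of triangle m
            fb[m - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def cepstral_coeffs(frame, fb, n_ceps=13):
    """Log filterbank energies followed by a DCT (standard cepstral recipe)."""
    n_fft = (fb.shape[1] - 1) * 2
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft)) ** 2
    energies = np.maximum(fb @ spectrum, 1e-10)  # floor to avoid log(0)
    return dct(np.log(energies), type=2, norm='ortho')[:n_ceps]
```

A genetic algorithm would then treat `edges_hz` as the chromosome and use the downstream HMM classifier's emotion-recognition accuracy as the fitness function, keeping the edge vector that classifies a held-out set best.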




Acknowledgment

Ali Bakhshi was supported by a UNIPRS scholarship at The University of Newcastle for his PhD.

Author information

Corresponding author

Correspondence to Ali Bakhshi.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Bakhshi, A., Chalup, S., Harimi, A. et al. Recognition of emotion from speech using evolutionary cepstral coefficients. Multimed Tools Appl 79, 35739–35759 (2020). https://doi.org/10.1007/s11042-020-09591-1

