
Tri-integrated convolutional neural network for audio image classification using Mel-frequency spectrograms

Multimedia Tools and Applications

Abstract

Emotion is a state that encompasses a variety of physiological phenomena. The classification of emotions has many applications in fields such as customer review analysis, product evaluation, and national security, making it a prominent area of research. State-of-the-art methodologies have used either text or raw audio files to classify emotions; in contrast, the proposed work utilizes Mel-frequency spectrograms. An integrated methodology, TiCNN (Tri-integrated Convolutional Neural Network), has been proposed for classifying emotions into eight classes. Three models, namely VGG16, VGG19, and a proposed CNN architecture, have been integrated and trained on the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) dataset. The proposed TiCNN approach classifies emotions into the eight classes with an accuracy of 93.27%; precision, recall, and F1-score of 0.93, 0.92, and 0.92, respectively, have also been used to evaluate its performance. Further, for model validation, the efficiency and efficacy of the proposed methodology have been analysed on the EMO-DB (Berlin Database of Emotional Speech) dataset, on which TiCNN achieves an accuracy of 92.78%. Empirical evaluation shows that the proposed methodology outperforms both conventional transfer learning models and state-of-the-art methodologies.
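The abstract outlines a pipeline in which speech clips are converted into Mel-frequency spectrogram images and fed to three integrated CNN branches (VGG16, VGG19, and a custom network) that jointly predict one of eight emotion classes. The paper's exact integration scheme and layer configuration are not given in the abstract, so the following is a minimal sketch assuming Keras and librosa; the feature-concatenation fusion, layer sizes, and preprocessing parameters are illustrative assumptions rather than the authors' published architecture.

```python
# A minimal sketch (not the authors' exact architecture) of a TiCNN-style
# pipeline: Mel-spectrogram "images" feeding three integrated CNN branches.
# Assumes TensorFlow/Keras and librosa; the layer sizes, 22,050 Hz sample
# rate, and feature-concatenation fusion are illustrative assumptions.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, Model

N_CLASSES = 8  # eight emotion classes, per the paper

def audio_to_mel_image(path, n_mels=128, size=(128, 128)):
    """Convert one audio clip into a fixed-size 3-channel Mel-spectrogram image."""
    y, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)          # log-scaled power
    mel_db = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    img = tf.image.resize(mel_db[..., np.newaxis], size).numpy()
    return np.repeat(img, 3, axis=-1)                      # 3 channels for VGG

def build_ticnn(input_shape=(128, 128, 3)):
    inp = layers.Input(shape=input_shape)

    # Branches 1 and 2: pre-trained VGG16 and VGG19 feature extractors.
    vgg16 = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                        input_shape=input_shape)
    vgg19 = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                        input_shape=input_shape)

    # Branch 3: a small custom CNN standing in for the paper's proposed network.
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    custom = layers.GlobalAveragePooling2D()(x)

    # Integrate the three branches by concatenating pooled feature vectors.
    feats = layers.Concatenate()([
        layers.GlobalAveragePooling2D()(vgg16(inp)),
        layers.GlobalAveragePooling2D()(vgg19(inp)),
        custom,
    ])
    hidden = layers.Dense(256, activation="relu")(feats)
    out = layers.Dense(N_CLASSES, activation="softmax")(hidden)
    return Model(inp, out)

model = build_ticnn()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The three branches could equally well be trained separately and fused by averaging their softmax outputs; since the abstract does not state the integration scheme, the concatenation above is only one plausible reading.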



Author information


Corresponding author

Correspondence to Deepika Kumar.

Ethics declarations

The authors declare that they do not have any conflict of interest. This research did not involve any human or animal participation. All authors have checked and agreed on the submission.

Conflict of interest

All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version. This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue.

All authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.


Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Khurana, A., Mittal, S., Kumar, D. et al. Tri-integrated convolutional neural network for audio image classification using Mel-frequency spectrograms. Multimed Tools Appl 82, 5521–5546 (2023). https://doi.org/10.1007/s11042-022-13358-1

