
Human auditory model based real-time smart home acoustic event monitoring

Published in: Multimedia Tools and Applications

Abstract

In this work, Gammatone (GT) filter bank energy features are combined with a deep neural network (GT-DNN) for robust acoustic event detection (AED) in a smart home environment, enabling the monitoring of human activities. The Gammatone filter bank models the human auditory system and decomposes environmental sound events into energy features across multiple frequency bands. These features are learned during the DNN training phase, in a manner analogous to how the human brain performs the AED task. Gammatone filter bank energy features are found to be superior to the popular Mel-scale filter bank features: the Gammatone filter bank output yields smooth spectrogram patterns that help identify the dominant characteristic features of the target events. Moreover, the auditory Gammatone filter bank features prove more robust to noise than Mel-scale filter bank features. The proposed GT-DNN model is tested on a single-board computer (SBC) prototype built on the popular Raspberry Pi 4 Model B, and experimental F-score results show impressive real-time AED performance. Furthermore, different parameters of the model are optimised, and it is used to classify acoustic events from the Freiburg-106 event dataset; noise from the NOISEX-92 dataset is combined with the clean event dataset used to train the model. A comparison of AED performance in terms of F-score at different signal-to-noise ratios (SNRs) between GT-DNN and baseline Mel-scale filter bank energy features shows improved results for the GT-DNN method. Finally, confidence scores for 10 different event classes are evaluated in the presence of the worst-case babble noise at 0 dB SNR, and excellent classification results are obtained with the proposed method. Detailed analysis and results are given in support of each claim.
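As a rough illustration of the feature extraction described above (a sketch, not the authors' implementation), Gammatone filter bank log-energy features can be computed by convolving a signal with 4th-order gammatone impulse responses whose centre frequencies are spaced on the ERB-rate scale. The gammatone and Glasberg–Moore ERB formulas are standard; the band count, frame length, and frequency range below are illustrative assumptions.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.064, order=4, b=1.019):
    """4th-order gammatone impulse response centred at fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    return (t ** (order - 1)
            * np.exp(-2 * np.pi * b * erb(fc) * t)
            * np.cos(2 * np.pi * fc * t))

def gt_energies(signal, fs, n_bands=32, fmin=100.0):
    """Log-energy per gammatone band for one signal segment."""
    fmax = fs / 2 * 0.9
    # Centre frequencies equally spaced on the ERB-rate scale.
    erb_lo = 21.4 * np.log10(4.37 * fmin / 1000 + 1)
    erb_hi = 21.4 * np.log10(4.37 * fmax / 1000 + 1)
    fcs = (10 ** (np.linspace(erb_lo, erb_hi, n_bands) / 21.4) - 1) * 1000 / 4.37
    feats = []
    for fc in fcs:
        band = np.convolve(signal, gammatone_ir(fc, fs), mode="same")
        feats.append(np.log(np.sum(band ** 2) + 1e-10))
    return np.array(feats)
```

Stacking these per-frame energy vectors over time yields the smooth auditory spectrogram patterns the abstract refers to, which then serve as DNN input.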

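The noisy training data described above (clean events combined with NOISEX-92 noise at a target SNR) can be produced by scaling the noise so that the clean-to-noise power ratio matches the desired level. A minimal sketch, assuming mono signals of equal sampling rate (the function name and signature are hypothetical):

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Add `noise` to `clean`, scaled so the resulting SNR equals snr_db."""
    noise = noise[:len(clean)]          # truncate noise to the event length
    p_clean = np.mean(clean ** 2)       # average power of the clean event
    p_noise = np.mean(noise ** 2)       # average power of the noise segment
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + gain * noise
```

For example, `mix_at_snr(event, babble, 0.0)` yields the 0 dB SNR babble condition evaluated in the abstract.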

[Figures 1–15 omitted]



Author information


Corresponding author

Correspondence to Sujoy Mondal.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Mondal, S., Barman, A.D. Human auditory model based real-time smart home acoustic event monitoring. Multimed Tools Appl 81, 887–906 (2022). https://doi.org/10.1007/s11042-021-11455-1

