Automatic Speech-Based Smoking Status Identification

  • Conference paper
  • In: Intelligent Computing (SAI 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 508)

Abstract

Identifying the smoking status of a speaker from speech has a range of applications, including smoking status validation, smoking cessation tracking, and speaker profiling. Previous research on smoking status identification has mainly focused on the speaker's low-level acoustic features such as fundamental frequency (F0), jitter, and shimmer, whereas high-level acoustic features such as Mel-Frequency Cepstral Coefficients (MFCC) and filter banks (Fbank) have rarely been explored for this task. In this study, we utilise both high-level acoustic features (MFCC, Fbank) and low-level acoustic features (F0, jitter, shimmer) for smoking status identification. Furthermore, we propose a deep neural network approach that combines a ResNet with these acoustic features, and we explore a data augmentation technique to further improve performance. Finally, we compare identification accuracy across the feature settings and obtain a best accuracy of 82.3%, a relative improvement of 12.7% and 29.8% over the initial audio classification approach and the rule-based approach, respectively.
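
As a rough, hypothetical sketch of the kind of pipeline the abstract describes (Fbank feature extraction, SpecAugment-style augmentation, and a ResNet-based classifier), the Python example below uses torchaudio and PyTorch. All feature settings, masking widths, and layer sizes here are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' released code): log-Mel filter-bank (Fbank)
# features, SpecAugment-style masking, and a small ResNet-style smoker/non-smoker
# classifier. Every hyperparameter below is an illustrative assumption.
import torch
import torch.nn as nn
import torchaudio

def extract_features(wave: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Return a (1, n_mels, time) log-Mel filter-bank 'image' for one utterance."""
    fbank = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=400, hop_length=160, n_mels=40
    )(wave)
    return torch.log(fbank + 1e-6).unsqueeze(0)  # add a channel dimension

def augment(spec: torch.Tensor) -> torch.Tensor:
    """SpecAugment-style frequency and time masking (hypothetical mask widths)."""
    spec = torchaudio.transforms.FrequencyMasking(freq_mask_param=8)(spec)
    spec = torchaudio.transforms.TimeMasking(time_mask_param=20)(spec)
    return spec

class BasicBlock(nn.Module):
    """Standard residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection

class SmokerResNet(nn.Module):
    """Tiny ResNet-style classifier for two classes: smoker vs. non-smoker."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(BasicBlock(32), BasicBlock(32))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, spec):                 # spec: (batch, 1, n_mels, time)
        x = self.blocks(self.stem(spec))
        x = self.pool(x).flatten(1)
        return self.fc(x)

if __name__ == "__main__":
    wave = torch.randn(16000)                # 1 s of dummy audio at 16 kHz
    spec = augment(extract_features(wave))   # (1, 40, T), masked
    logits = SmokerResNet()(spec.unsqueeze(0))
    print(logits.shape)                      # torch.Size([1, 2])
```

MFCC features or the low-level measures (F0, jitter, shimmer, e.g. extracted with Praat) could be fed to a classifier in the same way; the residual block above simply follows the standard design of He et al. rather than the paper's exact architecture.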

Author information

Corresponding author

Correspondence to Feng Hou.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ma, Z. et al. (2022). Automatic Speech-Based Smoking Status Identification. In: Arai, K. (eds) Intelligent Computing. SAI 2022. Lecture Notes in Networks and Systems, vol 508. Springer, Cham. https://doi.org/10.1007/978-3-031-10467-1_11
