Abstract
Identifying a speaker's smoking status from speech has a range of applications, including smoking status validation, smoking cessation tracking, and speaker profiling. Previous research on smoking status identification has mainly focused on the speaker's low-level acoustic features, such as fundamental frequency (F0), jitter, and shimmer, while the use of high-level acoustic features, such as Mel-frequency cepstral coefficients (MFCC) and filter bank (Fbank) features, has rarely been explored. In this study, we utilise both high-level acoustic features (i.e., MFCC, Fbank) and low-level acoustic features (i.e., F0, jitter, shimmer) for smoking status identification. Furthermore, we propose a deep neural network approach that employs ResNet together with these acoustic features, and we explore a data augmentation technique to further improve performance. Finally, we compare identification accuracy across feature settings and obtain a best accuracy of 82.3%, a relative improvement of 12.7% and 29.8% over the initial audio classification approach and the rule-based approach, respectively.
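The abstract mentions a data augmentation technique applied to the spectral features. A common choice for Fbank/MFCC-style inputs is SpecAugment-style time and frequency masking; the sketch below is an illustrative numpy-only implementation under that assumption (the function name `spec_augment` and all mask parameters are hypothetical, not taken from the paper).

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, freq_mask_width=8,
                 num_time_masks=2, time_mask_width=10, rng=None):
    """SpecAugment-style masking on a (num_bins, num_frames) feature
    matrix (e.g. Fbank features). Random frequency bands and time
    spans are zeroed out; the input array is left untouched."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_bins, n_frames = out.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, freq_mask_width + 1))   # mask height in bins
        f0 = int(rng.integers(0, max(1, n_bins - w + 1)))
        out[f0:f0 + w, :] = 0.0
    for _ in range(num_time_masks):
        w = int(rng.integers(0, time_mask_width + 1))   # mask width in frames
        t0 = int(rng.integers(0, max(1, n_frames - w + 1)))
        out[:, t0:t0 + w] = 0.0
    return out

# Example: augment a fake 40-bin, 100-frame Fbank matrix.
fbank = np.random.default_rng(0).random((40, 100))
augmented = spec_augment(fbank, rng=np.random.default_rng(1))
```

Because masking only zeroes regions, the augmented matrix keeps the original shape, which lets the same ResNet input layer consume both clean and augmented examples.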
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ma, Z. et al. (2022). Automatic Speech-Based Smoking Status Identification. In: Arai, K. (eds) Intelligent Computing. SAI 2022. Lecture Notes in Networks and Systems, vol 508. Springer, Cham. https://doi.org/10.1007/978-3-031-10467-1_11
Print ISBN: 978-3-031-10466-4
Online ISBN: 978-3-031-10467-1
eBook Packages: Intelligent Technologies and Robotics