
Patient State Recognition System for Healthcare Using Speech and Facial Expressions

Mobile & Wireless Health
Journal of Medical Systems

Abstract

Smart, interactive healthcare is a necessity in the modern age. A complete healthcare framework must address several issues, including accurate diagnosis, low-cost modeling, low-complexity design, seamless transmission, and sufficient storage. In this paper, we propose a patient state recognition system for a healthcare framework, designed to provide good recognition accuracy and low-cost modeling while remaining scalable. The system takes two main types of input, video and audio, captured in a multi-sensory environment. The speech and video inputs are processed separately during feature extraction and modeling; the two modalities are then merged at the score level, where the scores are obtained from the models of the different patient states. For the experiments, 100 people were recruited to mimic three patient states: normal, pain, and tensed. The experimental results show that the proposed system achieves an average recognition accuracy of 98.2%.
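The score-level fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each modality produces one score per candidate state (e.g., model log-likelihoods), that each modality's scores are min-max normalized, and that a weighted sum combines them before picking the highest-scoring state. The weight, state labels, and example scores are illustrative assumptions.

```python
import numpy as np

# Candidate patient states, as used in the paper's experiments.
STATES = ["normal", "pain", "tensed"]

def fuse_scores(speech_scores, video_scores, w_speech=0.5):
    """Score-level fusion of two modalities.

    Each input is one score per state (e.g., a per-state model
    log-likelihood). Scores are min-max normalized per modality so
    the two score ranges are comparable, then combined with a
    weighted sum; the state with the highest fused score wins.
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        # Degenerate case: all scores equal -> contribute nothing.
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = (w_speech * normalize(speech_scores)
             + (1.0 - w_speech) * normalize(video_scores))
    return STATES[int(np.argmax(fused))], fused

# Hypothetical per-state scores: speech-model log-likelihoods and
# video-classifier confidences, both favoring the "pain" state.
state, fused = fuse_scores([-41.2, -39.5, -44.0], [0.2, 0.7, 0.1])
print(state)  # -> pain
```

Normalizing before fusing matters here: without it, a modality whose raw scores span a wider numeric range (log-likelihoods vs. probabilities, say) would dominate the sum regardless of the chosen weight.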



Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University, Riyadh, Saudi Arabia for funding this work through the research group project no. RGP-228.

Author information


Corresponding author

Correspondence to M. Shamim Hossain.

Additional information

This article is part of the Topical Collection on Mobile & Wireless Health


Cite this article

Hossain, M.S. Patient State Recognition System for Healthcare Using Speech and Facial Expressions. J Med Syst 40, 272 (2016). https://doi.org/10.1007/s10916-016-0627-x
