
Real-Time Tracking and Recognition Systems for Interactive Telemedicine Health Services

Abstract

Recent changes affecting the health industry include the digitization of medical information and the exchange of medical information over a network-connected medical infrastructure. In this paper, we propose a real-time tracking and recognition system for interactive telemedicine health services. The proposed method detects both the hand and the fingers and applies the result to posture recognition in telemedicine. The detected hand or finger can be used to implement a non-contact mouse in a machine-to-machine environment. This technology can be used to control telemedicine health devices such as a public healthcare system, a pedometer health-information reader, a glucose-monitoring device, and a blood pressure gauge. Skin color is used to segment the hand region from the background, and the contour is extracted from the segmented hand. Contour analysis provides the locations of the fingertips on the hand. Fingertip tracking is performed using a constant-velocity model with a pixel-labeling approach. From the tracking process, several hand features are extracted and fed into a finite-state classifier to identify the hand configuration. The hand can be classified into many gesture classes or several different movement directions. Using this method, we performed an extensive experiment and obtained very encouraging results. We show that the method used in previous studies loses some of the tracked points, whereas the proposed method recovers all lost points with little or no displacement error. Ultimately, this paper provides empirical verification of the adequacy and validity of the proposed system for telemedicine health services. Accordingly, gesture recognition will improve the satisfaction and quality of interactive telemedicine health services.
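As a concrete illustration of the pipeline summarized above (skin-color segmentation, contour extraction, convexity-based fingertip detection, and constant-velocity prediction), the following minimal Python/OpenCV sketch shows one plausible realization. The color thresholds, the defect-depth cutoff, and all function names are illustrative assumptions, not the paper's calibrated values or actual implementation.

```python
import cv2
import numpy as np

# Skin-color bounds in YCrCb space (Y, Cr, Cb); illustrative values,
# not the calibrated ranges used in the paper.
SKIN_LOW = (0, 133, 77)
SKIN_HIGH = (255, 173, 127)

def detect_fingertips(frame_bgr, min_defect_depth=20 * 256):
    """Segment the hand by skin color, extract its contour, and
    estimate fingertip locations from convexity analysis."""
    # Skin-color segmentation, cleaned up with a morphological opening.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # The largest skin-colored contour is taken as the hand region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)

    # Hull points bounding sufficiently deep convexity defects
    # approximate the fingertips.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    tips = []
    if defects is not None:
        for start, end, _, depth in defects[:, 0]:
            if depth > min_defect_depth:  # depth is fixed-point (pixels * 256)
                tips.append(tuple(hand[start][0]))
                tips.append(tuple(hand[end][0]))
    return tips

def predict_position(prev_pt, velocity):
    """Constant-velocity prediction: center the search window for a
    tracked fingertip in the next frame on the previous position
    advanced by the last observed displacement."""
    return (prev_pt[0] + velocity[0], prev_pt[1] + velocity[1])
```

In the full system described in the paper, the pixel-labeling step would then re-associate detected tips with the predicted positions from frame to frame, which is presumably how briefly lost points are recovered.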


References

  1. Kang, S. K., Chung, K. Y., Rim, K. W., & Lee, J. H. (2011). Development of real-time gesture recognition system using visual interaction. In Proceedings of the international conference IT convergence and security 2011 (LNEE 120) (pp. 295–306). Berlin: Springer.

  2. Rho, M. J., Jang, K. S., Chung, K. Y., & Choi, I. Y. (2013). Comparison of knowledge, attitudes, and trust for the use of personal health information in clinical research. Multimedia Tools and Applications. doi:10.1007/s11042-013-1772-6.

  3. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2013). Home health gateway based healthcare services through u-health platform. Wireless Personal Communications, 73(2), 207–218.

  4. Kang, S. K., Chung, K. Y., Ryu, J. K., Rim, K. W., & Lee, J. H. (2013). Bio-Interactive healthcare service system using lifelog based context computing. Wireless Personal Communications, 73(2), 341–351.

  5. Kim, S. H., & Chung, K. Y. (2013). Medical information service system based on human 3D anatomical model. Multimedia Tools and Applications. doi:10.1007/s11042-013-1584-8.

  6. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2013). Mobile healthcare application with EMR interoperability for diabetes patients. Cluster Computing. doi:10.1007/s10586-013-0315-2.

  7. Kang, S. K., Chung, K. Y., & Lee, J. H. (2014). Development of head detection and tracking systems for visual surveillance. Personal and Ubiquitous Computing, 18(3), 515–522.

  8. Davis, J., & Shah, M. (1994). Visual gesture recognition. IEE Proceedings: Vision, Image and Signal Processing, 141(2), 101–106.

  9. Vogler, C., & Metaxas, D. (2001). A framework for recognizing the simultaneous aspects of American sign language. Computer Vision and Image Understanding, 81, 358–384.

  10. Fillbrandt, H., Akyol, S., & Kraiss, K. F. (2003). Extraction of 3D hand shape and posture from image sequences for sign language recognition. In Proceedings of the IEEE international workshop on analysis and modeling of faces and gestures (Vol. 17, pp. 181–186).

  11. Jung, H., & Chung, K. Y. (2013). Discovery of automotive design paradigm using relevance feedback. Personal and Ubiquitous Computing. doi:10.1007/s00779-013-0738-z.

  12. Rehg, J., & Kanade, T. (1993). DigitEyes: Vision-based human hand tracking. School of Computer Science Technical Report CMU-CS-93-220, Carnegie Mellon University.

  13. Ng, C. W., & Ranganath, S. (2002). Real-time gesture recognition system and application. Image and Vision Computing, 20(13–14), 993–1007.

  14. Abe, K., Saito, H., & Ozawa, S. (2002). Virtual 3D interface system via hand motion recognition from two cameras. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 32(4), 536–540.

  15. Kwon, K., Zhang, H., & Dornaika, F. (2001). Hand pose recovery with a single video camera. In Proceedings of the IEEE international conference on robotics and automation (pp. 3181–4261).

  16. Yang, J. G., Kim, J. K., Kang, U. G., & Lee, Y. H. (2013). Coronary heart disease optimization system on adaptive-network-based fuzzy inference system and linear discriminant analysis (ANFIS-LDA). Personal and Ubiquitous Computing. doi:10.1007/s00779-013-0737-0.

  17. Ha, O. K., Song, Y. S., Chung, K. Y., Lee, K. D., & Park, D. (2014). Relation model describing the effects of introducing RFID in the supply chain: Evidence from the food and beverage industry in South Korea. Personal and Ubiquitous Computing, 18(3), 553–561.

  18. Kim, J. Y., Chung, K. Y., & Jung, J. J. (2014). Single tag sharing scheme for multiple-object RFID applications. Multimedia Tools and Applications, 68(2), 465–477.

  19. Kim, S. H., & Chung, K. Y. (2014). 3D simulator for stability analysis of finite slope causing plane activity. Multimedia Tools and Applications, 68(2), 455–463.

  20. Shirai, Y., Tanibata, N., & Shimada, N. (2002). Extraction of hand features for recognition of sign language words. VI'2002, Computer-Controlled Mechanical Systems, Graduate School of Engineering, Osaka University.

  21. Boutaba, R., Chung, K. Y., & Gen, M. (2014). Recent trends in interactive multimedia computing for industry. Cluster Computing. doi:10.1007/s10586-014-0349-0.

  22. Hamada, Y., Shimada, N., & Shirai, Y. (2004). Hand shape estimation under complex backgrounds for sign language recognition. In Proceedings of the international conference on automatic face and gesture recognition (pp. 589–594).

  23. Kim, G. H., Kim, Y. G., & Chung, K. Y. (2013). Towards virtualized and automated software performance test architecture. Multimedia Tools and Applications. doi:10.1007/s11042-013-1536-3.

  24. Lee, J., & Kunii, T. (1995). Model-based analysis of hand posture. IEEE Computer Graphics and Applications, 15(5), 77–86.

  25. Wu, Y., & Huang, T. S. (1999). Capturing articulated human hand motion: A divide-and-conquer approach. In Proceedings of the IEEE international conference on computer vision (pp. 606–611), Corfu, Greece.

  26. Nölker, C., & Ritter, H. (1997). Detection of fingertips in human hand movement sequences. In I. Wachsmuth & M. Fröhlich (Eds.), Gesture and sign language in human–computer interaction (pp. 209–218).

  27. Kuch, J. J., & Huang, T. S. (1995). Vision-based hand modeling and tracking for virtual teleconferencing and telecollaboration. In Proceedings of the IEEE international conference on computer vision (pp. 666–671), Cambridge, MA.

  28. Heap, T., & Hogg, D. (1996). Towards 3D hand tracking using a deformable model. In Proceedings of the IEEE international conference on automatic face and gesture recognition (pp. 140–145), Killington, VT.

  29. Wu, Y., & Huang, T. S. (1999). Vision-based gesture recognition: A review. In Gesture workshop (GW 99) (pp. 103–115), France.

  30. Jung, H., & Chung, K. Y. (2013). Mining based associative image filtering using harmonic mean. Cluster Computing. doi:10.1007/s10586-013-0318-z.

  31. Chung, K. Y. (2013). Recent trends on convergence and ubiquitous computing. Personal and Ubiquitous Computing. doi:10.1007/s00779-013-0743-2.

  32. Oh, S. Y., & Chung, K. Y. (2013). Target speech feature extraction using non-parametric correlation coefficient. Cluster Computing. doi:10.1007/s10586-013-0284-5.

  33. Ko, J. W., Chung, K. Y., & Han, J. S. (2013). Model transformation verification using similarity and graph comparison algorithm. Multimedia Tools and Applications. doi:10.1007/s11042-013-1581-y.

  34. Han, J. S., Chung, K. Y., & Kim, G. J. (2013). Policy on literature content based on software as service. Multimedia Tools and Applications. doi:10.1007/s11042-013-1664-9.

  35. Graetzel, C., Grange, S., Fong, T., & Baur, C. (2003). A non-contact mouse for surgeon-computer interaction. In Medical image computing and computer assisted intervention, Toronto, Canada.

  36. Frigola, M., Fernandez, J., & Aranda, J. (2003). Visual human machine interface by gestures. In Proceedings of the IEEE international conference on robotics and automation (Vol. 1, pp. 386–391).

  37. Ueda, E., Matsumoto, Y., Imai, M., & Ogasawara, T. (2003). A hand-pose estimation for vision-based human interfaces. IEEE Transactions on Industrial Electronics, 50(4), 676–684.

  38. Isaacs, J., & Foo, J. S. (2004). Hand pose estimation for American sign language recognition. In Proceedings of the thirty-sixth southeastern symposium on system theory (pp. 132–136).

  39. Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.

  40. Ouhaddi, H., & Horain, P. (1999). 3D hand gesture tracking by model registration. In Workshop on Synthetic-Natural Hybrid Coding and Three Dimensional Imaging (pp. 70–73).

  41. Koller, D., Daniilidis, K., Thorhallson, T., & Nagel, H.-H. (1992). Model-based object tracking in traffic scenes. In Proceedings of ECCV '92. Berlin: Springer.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2013R1A1A2059964).

Author information

Corresponding author

Correspondence to Jung-Hyun Lee.

Additional information

This paper is a significantly revised version of an earlier paper presented in [1].

Cite this article

Kang, SK., Chung, K. & Lee, JH. Real-Time Tracking and Recognition Systems for Interactive Telemedicine Health Services. Wireless Pers Commun 79, 2611–2626 (2014). https://doi.org/10.1007/s11277-014-1784-1
