
A synthetic training framework for providing gesture scalability to 2.5D pose-based hand gesture recognition systems

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

The use of hand gestures offers an alternative to commonly used human-computer interfaces (i.e., keyboard, mouse, gamepad), providing a more intuitive way of navigating menus and multimedia applications. One of the most difficult issues when designing a hand gesture recognition system is introducing new detectable gestures at low cost; this is known as gesture scalability. Commonly, introducing new gestures requires a recording session involving real subjects. This paper presents a training framework for hand posture detection systems based on a learning scheme fed with synthetically generated range images. Different configurations of a 3D hand model yield sets of synthetic subjects, which have shown good performance in separating gestures from several state-of-the-art dictionaries. The proposed approach allows new dictionaries to be learned without recording real subjects, so it is fully scalable in terms of gestures. The accuracy rates obtained for the evaluated dictionaries are comparable to, and in some cases better than, those reported for training schemes based on real subjects.
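The core idea of the framework, training a posture classifier entirely on synthetically rendered 2.5D range images so that new gestures can be added without recording real subjects, can be illustrated with a minimal sketch. Everything below is hypothetical: the `render_range_image` function is a toy stand-in for rendering an articulated 3D hand model (here, gestures are just different numbers of extended-finger bars in a depth map, and `subject_scale` mimics per-subject hand-size variation), and a nearest-centroid classifier stands in for the paper's learning scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_range_image(n_fingers, subject_scale, noise=0.02):
    """Toy stand-in for rendering a 2.5D range image of a hand posture.

    Hypothetical rendering: the actual framework renders configurations
    of a full articulated 3D hand model. Here a 'gesture' is simply the
    number of extended fingers, drawn as depth bars.
    """
    img = np.zeros((32, 32))
    img[24:32, 8:24] = 0.5                        # palm at mid depth
    width = max(1, int(3 * subject_scale))        # per-subject finger width
    for f in range(n_fingers):                    # extended fingers, nearer to camera
        x = 6 + 4 * f
        img[6:24, x:x + width] = 0.8
    return img + noise * rng.standard_normal(img.shape)

GESTURES = [1, 2, 3, 5]                           # a small gesture dictionary

# Training set built from synthetic subjects only (random model configurations)
train = [(g, render_range_image(g, rng.uniform(0.8, 1.2)))
         for g in GESTURES for _ in range(20)]
centroids = {g: np.mean([im for gg, im in train if gg == g], axis=0)
             for g in GESTURES}

def classify(img):
    """Assign the gesture whose synthetic centroid is closest."""
    return min(GESTURES, key=lambda g: np.linalg.norm(img - centroids[g]))

# Evaluate on freshly generated, unseen synthetic subjects
test = [(g, render_range_image(g, rng.uniform(0.8, 1.2)))
        for g in GESTURES for _ in range(10)]
accuracy = np.mean([classify(im) == g for g, im in test])
print(f"accuracy: {accuracy:.2f}")
```

Note how gesture scalability falls out of this design: extending the dictionary only requires rendering a new model configuration and recomputing its centroid, with no recording session involving real subjects.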




Author information


Corresponding author

Correspondence to Javier Molina.

About this article

Cite this article

Molina, J., Martínez, J.M. A synthetic training framework for providing gesture scalability to 2.5D pose-based hand gesture recognition systems. Machine Vision and Applications 25, 1309–1315 (2014). https://doi.org/10.1007/s00138-014-0620-7

