
Sign Segmentation Using Dynamics and Hand Configuration for Semi-automatic Annotation of Sign Language Corpora

  • Conference paper
Gesture and Sign Language in Human-Computer Interaction and Embodied Communication (GW 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7206)


Abstract

This paper addresses the problem of sign language video annotation. Sign language segmentation is currently performed manually, which is time-consuming, error-prone, and not reproducible. We propose an automatic approach to segmenting signs. A particle-filter-based tracker follows the hands and the head; motion features are then used to classify segments as one-handed or two-handed and to detect candidate boundary events. Events detected in the middle of a sign are discarded using hand-shape features, where hand shape is characterized by similarity measurements. An evaluation shows both the performance and the limitations of the proposed approach.
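The paper's full pipeline (particle-filter tracking, one-/two-hand classification, hand-shape filtering) is not reproduced here, but the motion-based event-detection step the abstract describes can be sketched generically: candidate sign boundaries are taken where hand speed falls to a local minimum. The function name, threshold, and synthetic trajectory below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_events(positions, fps=25.0, vel_thresh=5.0):
    """Detect candidate sign boundaries as local minima of hand speed.

    positions: sequence of (x, y) hand-centre coordinates, one per frame.
    Returns frame indices where speed drops below vel_thresh at a local
    minimum -- candidate segmentation events. (Hypothetical sketch; the
    paper additionally filters such events using hand-shape similarity.)
    """
    pos = np.asarray(positions, dtype=float)
    # Frame-to-frame speed in pixels per second.
    vel = np.linalg.norm(np.diff(pos, axis=0), axis=1) * fps
    events = []
    for t in range(1, len(vel) - 1):
        if vel[t] < vel_thresh and vel[t] <= vel[t - 1] and vel[t] < vel[t + 1]:
            # vel[t] is the speed between frames t and t+1, so the hand
            # is at rest at frame t+1.
            events.append(t + 1)
    return events

# Synthetic trajectory: move, pause, move -- the pause yields one event.
traj = [(0, 0), (10, 0), (20, 0), (20, 0), (20, 0), (30, 0), (40, 0)]
print(detect_events(traj))  # → [4]
```

In the paper, events like these are then pruned when hand-shape similarity indicates the pause falls mid-sign rather than at a sign boundary.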





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gonzalez, M., Collet, C. (2012). Sign Segmentation Using Dynamics and Hand Configuration for Semi-automatic Annotation of Sign Language Corpora. In: Efthimiou, E., Kouroupetroglou, G., Fotinea, SE. (eds) Gesture and Sign Language in Human-Computer Interaction and Embodied Communication. GW 2011. Lecture Notes in Computer Science(), vol 7206. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34182-3_19


  • DOI: https://doi.org/10.1007/978-3-642-34182-3_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34181-6

  • Online ISBN: 978-3-642-34182-3

  • eBook Packages: Computer Science (R0)
