Abstract
This paper addresses the problem of sign language video annotation. Sign language segmentation is currently performed manually, which is time-consuming, error-prone, and not reproducible. In this paper we propose an automatic approach to segmenting signs. We use a particle-filter-based approach to track the hands and head. Motion features are used to classify segments as performed with one or two hands and to detect events. Events detected in the middle of a sign are discarded based on hand-shape features; hand shape is characterized using similarity measurements. An evaluation shows both the performance and the limitations of the proposed approach.
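The hand and head tracking mentioned above relies on a particle filter. As a rough illustration of the general technique (not the paper's implementation, whose motion and observation models are more elaborate), the following sketch tracks a 2D point such as a hand centroid through noisy observations using a bootstrap predict-weight-resample loop; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(observations, n_particles=500, motion_std=5.0, obs_std=10.0):
    """Track a 2D point (e.g. a hand centroid) through noisy observations.

    Bootstrap (condensation-style) particle filter: predict with a
    random-walk motion model, weight particles by a Gaussian likelihood
    of the current observation, then resample.
    """
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(0.0, obs_std, size=(n_particles, 2))
    estimates = []
    for z in observations:
        # Predict: diffuse particles under a random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight each particle by how well it matches the observation.
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / obs_std**2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(weights @ particles)
        # Resample: draw particles proportionally to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Noisy observations of a hand moving along a straight line.
true_path = np.stack([np.linspace(0, 100, 30), np.linspace(0, 50, 30)], axis=1)
obs = true_path + rng.normal(0.0, 8.0, size=true_path.shape)
est = particle_filter_track(obs)
```

In a real sign language setting, the observation likelihood would be computed from image cues (e.g. skin color), and a richer dynamic model would handle occlusions between hands and face.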
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Gonzalez, M., Collet, C. (2012). Sign Segmentation Using Dynamics and Hand Configuration for Semi-automatic Annotation of Sign Language Corpora. In: Efthimiou, E., Kouroupetroglou, G., Fotinea, S.E. (eds.) Gesture and Sign Language in Human-Computer Interaction and Embodied Communication. GW 2011. Lecture Notes in Computer Science, vol. 7206. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34182-3_19
Print ISBN: 978-3-642-34181-6
Online ISBN: 978-3-642-34182-3