ECCV 2022 Sign Spotting Challenge: Dataset, Design and Results

  • Conference paper
  • In: Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Abstract

The ECCV 2022 Sign Spotting Challenge focused on the problem of fine-grained sign spotting for continuous sign language recognition. We have released and made publicly available a new Spanish Sign Language dataset of around 10 hours of video in the health domain, performed by 7 deaf people and 3 interpreters. The added value of this dataset over existing ones is its precise frame-level annotation of 100 signs, with their corresponding glosses and variants, made by sign language experts. This paper summarizes the design and results of the challenge, which attracted 79 participants: it contextualizes the problem, defines the dataset, protocols and baseline models, and discusses the top-winning solutions and future directions on the topic.
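
Because the released annotations are frame-level, sign spotting can be scored as a temporal localization task. As a rough, non-authoritative illustration (the official evaluation protocol is defined on the challenge page, see Note 1 below), the sketch below matches predicted sign intervals against ground-truth intervals by temporal intersection-over-union and reports an F1 score; the Interval type, the greedy matching and the 0.5 IoU threshold are assumptions made for illustration, not the challenge's exact metric.

    # Hypothetical sketch of a spotting evaluation: greedy one-to-one matching of
    # predicted sign intervals to frame-level ground truth by temporal IoU.
    # Names and the 0.5 threshold are illustrative assumptions, not the official
    # challenge metric.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        gloss: str   # sign gloss label
        start: int   # first frame (inclusive)
        end: int     # last frame (inclusive)

    def temporal_iou(a: Interval, b: Interval) -> float:
        """Intersection-over-union of two frame intervals."""
        inter = max(0, min(a.end, b.end) - max(a.start, b.start) + 1)
        union = (a.end - a.start + 1) + (b.end - b.start + 1) - inter
        return inter / union if union else 0.0

    def spotting_f1(preds, gts, iou_thr=0.5):
        """F1 after greedy one-to-one matching of predictions to ground truth."""
        matched, tp = set(), 0
        for p in preds:
            for i, g in enumerate(gts):
                if i not in matched and p.gloss == g.gloss and temporal_iou(p, g) >= iou_thr:
                    matched.add(i)
                    tp += 1
                    break
        precision = tp / len(preds) if preds else 0.0
        recall = tp / len(gts) if gts else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

For example, under these assumptions spotting_f1([Interval("DOCTOR", 120, 150)], [Interval("DOCTOR", 118, 152)]) returns 1.0, since the two intervals overlap with an IoU of about 0.89.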


Notes

  1. Challenge - https://chalearnlap.cvc.uab.cat/challenge/49/description/.
  2. Dataset - https://chalearnlap.cvc.uab.cat/dataset/42/description/.
  3. Codalab - https://codalab.lisn.upsaclay.fr.


Acknowledgments

This work has been supported by the Spanish projects PID2019-105093GB-I00 and RTI2018-101372-B-I00, by ICREA under the ICREA Academia programme and by the Xunta de Galicia and ERDF through the Consolidated Strategic Group AtlanTTic (2019–2022). Manuel Vázquez is funded by the Spanish Ministry of Science and Innovation through the predoc grant PRE2019-088146.

Author information

Corresponding author

Correspondence to Manuel Vázquez Enríquez.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Vázquez Enríquez, M., Castro, J.L.A., Fernandez, L.D., Jacques Junior, J.C.S., Escalera, S. (2023). ECCV 2022 Sign Spotting Challenge: Dataset, Design and Results. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13808. Springer, Cham. https://doi.org/10.1007/978-3-031-25085-9_13

  • DOI: https://doi.org/10.1007/978-3-031-25085-9_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25084-2

  • Online ISBN: 978-3-031-25085-9

  • eBook Packages: Computer Science, Computer Science (R0)
