Abstract
The ECCV 2022 Sign Spotting Challenge focused on the problem of fine-grained sign spotting for continuous sign language recognition. We have released and made publicly available a new Spanish sign language dataset of around 10 hours of video data in the health domain, performed by 7 deaf people and 3 interpreters. The added value of this dataset over existing ones is the precise frame-level annotation, made by sign language experts, of 100 signs with their corresponding glosses and variants. This paper summarizes the design and results of the challenge, which attracted 79 participants; it contextualizes the problem, defines the dataset, protocols, and baseline models, and discusses the top-winning solutions and future directions on the topic.
Notes
- Codalab: https://codalab.lisn.upsaclay.fr.
Acknowledgments
This work has been supported by the Spanish projects PID2019-105093GB-I00 and RTI2018-101372-B-I00, by ICREA under the ICREA Academia programme, and by the Xunta de Galicia and ERDF through the Consolidated Strategic Group AtlanTTic (2019–2022). Manuel Vázquez is funded by the Spanish Ministry of Science and Innovation through the predoctoral grant PRE2019-088146.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Vázquez Enríquez, M., Castro, J.L.A., Fernandez, L.D., Jacques Junior, J.C.S., Escalera, S. (2023). ECCV 2022 Sign Spotting Challenge: Dataset, Design and Results. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13808. Springer, Cham. https://doi.org/10.1007/978-3-031-25085-9_13
DOI: https://doi.org/10.1007/978-3-031-25085-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25084-2
Online ISBN: 978-3-031-25085-9