Abstract
Continuous Sign Language Recognition (CSLR) is a challenging task in the field of action recognition: a video must be segmented into an indefinite number of glosses belonging to different classes. Current approaches typically use deep learning methods trained end to end. One popular CSLR paradigm is a three-step network: a visual module extracts 2D frame features and short-term sequential features, a sequential module then models contextual associations, and finally a Connectionist Temporal Classification (CTC) loss constrains the output. Gloss alignment ability is found to be an important factor in CSLR model performance. However, the three-step paradigm relies mainly on the sequential module to align glosses; the visual module focuses only on local information and contributes little to alignment, leading to inconsistent training between the two modules. This paper proposes an Attention Auxiliary Supervision (AAS) method that optimizes the parameters of the visual module and encourages it to attend to global information, thereby improving the alignment ability of the whole model. As a component external to the main model, the proposed AAS method is flexible and is expected to be applicable to other CSLR models without increasing the cost of inference. The model performs well on two large-scale CSLR datasets, PHOENIX14 (21.1% WER on the test set) and PHOENIX14-T (20.9% WER on the test set), which demonstrates its competitiveness among state-of-the-art models.
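The three-step pipeline and the auxiliary branch described above can be summarized in a short sketch. Below is a minimal PyTorch illustration, assuming pre-extracted per-frame features, a single temporal convolution standing in for the visual module, a BiLSTM as the sequential module, and a multi-head self-attention auxiliary head trained with a weighted auxiliary CTC loss; all layer sizes, the `aux_weight` factor, and the exact placement of the auxiliary branch are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of the three-step CSLR paradigm with an attention-based
# auxiliary CTC head on the visual module. Layer choices, dimensions, and
# aux_weight are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class CSLRWithAAS(nn.Module):
    def __init__(self, num_classes, feat_dim=512, aux_weight=0.3):
        super().__init__()
        # Visual module: per-frame features (stub for a 2D CNN) + short-term 1D conv.
        self.frame_encoder = nn.Linear(2048, feat_dim)
        self.temporal_conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        # Sequential module: long-term contextual modeling.
        self.blstm = nn.LSTM(feat_dim, feat_dim // 2, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.main_head = nn.Linear(feat_dim, num_classes)
        # Auxiliary supervision branch: self-attention over visual features,
        # supervised by its own CTC loss and unused at inference.
        self.aux_attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.aux_head = nn.Linear(feat_dim, num_classes)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)
        self.aux_weight = aux_weight

    def forward(self, frame_feats, targets=None, feat_lens=None, target_lens=None):
        # frame_feats: (B, T, 2048) pre-extracted per-frame features.
        v = self.frame_encoder(frame_feats)                          # (B, T, D)
        v = self.temporal_conv(v.transpose(1, 2)).transpose(1, 2)    # short-term context
        seq, _ = self.blstm(v)
        main_logits = self.main_head(seq)                            # (B, T, C)
        if targets is None:                                          # inference path only
            return main_logits
        attn_out, _ = self.aux_attn(v, v, v)                         # global view of visual features
        aux_logits = self.aux_head(attn_out)
        log_probs = main_logits.log_softmax(-1).transpose(0, 1)      # (T, B, C) for CTC
        aux_log_probs = aux_logits.log_softmax(-1).transpose(0, 1)
        loss = self.ctc(log_probs, targets, feat_lens, target_lens) \
             + self.aux_weight * self.ctc(aux_log_probs, targets, feat_lens, target_lens)
        return loss
```

Because the auxiliary branch is evaluated only when targets are provided, it adds supervision to the visual module during training while leaving the inference path unchanged, consistent with the claim that AAS does not increase inference cost.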
Acknowledgements
This work is funded by the "Project research on human–robot interactive sampling robots with safety, autonomy, and intelligent operations" supported by the National Natural Science Foundation of China (NSFC), Grant/Award Number 92048205, and by the China Scholarship Council (CSC), Grant/Award Number 202008310014.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Qin, X. et al. (2024). Attention Auxiliary Supervision for Continuous Sign Language Recognition. In: Liu, F., Sadanandan, A.A., Pham, D.N., Mursanto, P., Lukose, D. (eds) PRICAI 2023: Trends in Artificial Intelligence. PRICAI 2023. Lecture Notes in Computer Science, vol 14326. Springer, Singapore. https://doi.org/10.1007/978-981-99-7022-3_2