
Advances in machine translation for sign language: approaches, limitations, and challenges


Abstract

Sign languages are used by deaf communities around the globe to communicate with one another. They are gesture-based languages in which a signer uses hand movements and facial expressions, with each gesture representing a word or phrase of a natural language. There are more than 200 different sign languages in the world. To facilitate the learning of sign languages, researchers have compiled sign language repositories of gestures. Algorithms have also been proposed to translate natural language into sign language, which is then rendered as gestures using avatar technology. Conversely, several approaches to gesture recognition have been proposed in the literature, many of which rely on specialized hardware, and mobile applications have been developed for learning and translating sign languages. This article presents a systematic literature review of these multidisciplinary aspects of sign language translation. It provides a detailed analysis of 147 carefully selected high-quality research articles and books related to the subject. Specifically, it categorizes the approaches used for each component, discusses their theoretical foundations, and provides a comparative analysis of the proposed approaches. Lastly, open research challenges and future directions for each facet of the sign language translation problem are discussed. To the best of our knowledge, this is the first comprehensive survey on sign language translation that covers state-of-the-art research from multidisciplinary perspectives.
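As an illustration of the text-to-sign pipeline described above, in which natural language is first translated into a sign language representation (a gloss sequence) and then rendered as gestures by an avatar, the following minimal Python sketch shows a toy rule-based English-to-gloss step. It is not taken from any system covered in this survey; the lexicon, the set of dropped function words, and the time-first reordering rule are hypothetical placeholders used only for illustration.

# Toy lexicon mapping English words to gloss tokens (hypothetical entries).
LEXICON = {
    "i": "ME",
    "go": "GO",
    "school": "SCHOOL",
    "tomorrow": "TOMORROW",
}
# Function words that gloss representations typically omit (simplified).
DROP = {"am", "is", "are", "to", "the", "a", "an", "will"}
# Time expressions, used by the toy reordering rule below.
TIME_SIGNS = {"TOMORROW", "YESTERDAY", "NOW"}

def english_to_gloss(sentence: str) -> list:
    # Tokenize and normalise: lower-case and strip simple punctuation.
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    # Drop function words, then map via the lexicon; unknown words are
    # passed through as upper-case glosses.
    glosses = [LEXICON.get(t, t.upper()) for t in tokens if t not in DROP]
    # Toy reordering rule: move time expressions to the front
    # (time-first ordering is often described for sign language grammars).
    front = [g for g in glosses if g in TIME_SIGNS]
    rest = [g for g in glosses if g not in TIME_SIGNS]
    return front + rest

print(english_to_gloss("I will go to school tomorrow."))
# Expected output: ['TOMORROW', 'ME', 'GO', 'SCHOOL']

In practice, the gloss sequence produced by such a translation component is handed to an avatar module for signing, and the simplistic lexicon and rules above are replaced by the corpus-driven, statistical, or neural translation approaches reviewed in this article.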


Availability of data and material

Not applicable.

Code availability

Not applicable.


Acknowledgements

This study is part of Ms Uzma Farooq's doctoral thesis work at the Faculty of Computing, Universiti Teknologi Malaysia. Prof. Hussain would like to acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) under Grants EP/M026981/1, EP/T021063/1 and EP/T024917/1. The authors also wish to thank the anonymous reviewers for their insightful comments and suggestions, which helped improve the quality of the paper. Part of this work has been presented in a conference paper available at https://ieeexplore.ieee.org/document/8966714.

Funding

No funding was received for this research.

Author information

Corresponding author

Correspondence to Adnan Abid.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest or competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Farooq, U., Rahim, M.S.M., Sabir, N. et al. Advances in machine translation for sign language: approaches, limitations, and challenges. Neural Comput & Applic 33, 14357–14399 (2021). https://doi.org/10.1007/s00521-021-06079-3
