
Towards an open platform for machine translation of spoken languages into sign languages

Machine Translation

Abstract

The purpose of this paper is to investigate the feasibility of offering a multilingual platform for text-to-sign translation, i.e., a solution where a machine translates digital content in several spoken languages into several sign languages in scenarios such as Digital TV, the Web and Cinema. This solution, called OpenSigns, is an open platform that has several common components for generic functionality originating from the Suíte VLibras, including the creation and manipulation of 3D animation models, together with interchangeable mechanisms specific to each sign language, such as a text-to-gloss machine translation engine and a signs dictionary for each sign language. Our motivation is that concentrating efforts and resources around a single solution could advance the state of the art, providing a standard solution for the industry and greater functional flexibility for the common components. In addition, techniques and heuristics could be shared between the translation mechanisms, reducing the effort required to make a new sign language available on the platform, which may further enhance digital inclusion and accessibility, especially in the poorest countries.




Notes

  1. Censo demográfico brasileiro do IBGE 2010 (IBGE Brazilian Census of 2010). Brazilian Institute of Geography and Statistics. http://goo.gl/e5t6fS. Accessed 01 Dec 2017.

  2. http://goo.gl/EbapKu. Accessed 30 Nov 2016.

  3. In this paper, we use the term “text-to-sign” to represent the translation of texts from spoken languages into sign languages.

  4. The Suíte VLibras is the result of a partnership between the Brazilian Ministry of Planning, Development and Management (MP), through its Information Technology Secretariat (STI), and the Federal University of Paraíba (UFPB). It consists of a set of tools (text, audio and video) for the Brazilian Sign Language (Libras), making computers, mobile devices and Web platforms accessible to the Deaf. Currently, VLibras is used on several governmental and private sites, among them the main site of the Brazilian government (https://brasil.gov.br), the Chamber of Deputies (https://camara.leg.br) and the Federal Senate (https://senado.leg.br). Further information can be obtained from https://www.vlibras.gov.br.

  5. In this paper, we use the term “text-to-text” to represent the translation of texts between spoken or written languages.

  6. This is a commonly studied problem in MT, cf. Liu et al. (2018) for a recent overview of available techniques.

  7. http://www.signslator.com. Accessed 30 Nov 2016.

  8. http://www.handtalk.me. Accessed 30 Nov 2016.

  9. http://prodeaf.net. Accessed 30 Nov 2016.

  10. http://portal.rybena.com.br/site-rybena. Accessed 30 Nov 2016.

  11. Except for the VLibras-Desktop, which operates autonomously and offline, with an embedded MT system and a copy of the Signs Dictionary.

  12. We use the term “text-to-gloss” to represent the translation of texts in spoken languages into a textual representation in sign language, called “gloss”.
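As a toy illustration of this text-to-gloss step, a rule-based sketch might normalize the input, drop function words, and look the remaining words up in a gloss dictionary. The stopword list and dictionary below are invented for illustration and are not the VLibras rules; unknown words fall back to fingerspelling, marked here with an `FS:` prefix.

```python
# Hypothetical rule-based text-to-gloss sketch (not the VLibras engine):
# lowercase and strip punctuation, drop function words, then map each
# remaining word to a gloss, falling back to fingerspelling for unknowns.
STOPWORDS = {"the", "a", "an", "is", "are", "to"}
GLOSS_DICT = {"house": "HOUSE", "big": "BIG", "go": "GO", "i": "ME"}

def text_to_gloss(sentence: str) -> list[str]:
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    content = [t for t in tokens if t and t not in STOPWORDS]
    # Words missing from the dictionary are fingerspelled (note 13).
    return [GLOSS_DICT.get(t, "FS:" + t.upper()) for t in content]

print(text_to_gloss("I go to the big house."))  # → ['ME', 'GO', 'BIG', 'HOUSE']
```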

  13. Fingerspelling (or dactylology) is the communication in sign language of a word or other expression by rendering its written form letter by letter in a manual alphabet (definition extracted from http://www.dictionary.com).
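Fingerspelling itself is straightforward to sketch: the word is rendered one manual-alphabet gloss per letter. The helper below is hypothetical, not the platform's implementation.

```python
# Hypothetical fingerspelling fallback: render a word letter by letter
# in the manual alphabet, one gloss per alphabetic character.
def fingerspell(word: str) -> list[str]:
    return [ch.upper() for ch in word if ch.isalpha()]

print(fingerspell("Libras"))  # → ['L', 'I', 'B', 'R', 'A', 'S']
```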

  14. http://unity3d.com.

  15. http://www.blender.org.

  16. This API can identify the input language of a sentence and translate it automatically into a target spoken language (https://cloud.google.com/translate).

  17. https://wordnet.princeton.edu/.

  18. https://github.com/UniversalDependencies/UD_Portuguese-Bosque.

  19. In this test, the authors randomly selected 69 sentences, and two sign language interpreters generated a sequence of glosses in Libras for each of them. The VLibras system was then used to automatically generate a sequence of glosses for the same sentences, and the WER (Niessen et al. 2000) and BLEU (Papineni et al. 2002) scores were calculated for the two scenarios.
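The WER metric used in such tests is the word-level Levenshtein distance between the reference gloss sequence and the system's output, normalized by the reference length; a minimal sketch (the gloss sequences below are invented examples, not items from the test set):

```python
def wer(reference: list[str], hypothesis: list[str]) -> float:
    # Word Error Rate: word-level edit distance / reference length.
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i                      # delete all reference words
    for j in range(len(hypothesis) + 1):
        d[0][j] = j                      # insert all hypothesis words
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(reference)][len(hypothesis)] / len(reference)

print(wer("ME GO HOUSE BIG".split(), "ME GO BIG HOUSE".split()))  # → 0.5
```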

  20. It is important to point out that all computational tests were conducted after the Google Cloud Translation API shifted from Statistical to Neural MT. Thereafter, a Neural MT system was used in the text-to-text translation module.

  21. https://www.manythings.org/anki.

  22. The Tatoeba project database (https://tatoeba.org) is powered by a community of volunteers, and only sentences created by native language speakers are included in the corpus to improve the quality of the translations.

  23. https://goo.gl/wccRvM.

  24. https://goo.gl/fccQdK.

  25. https://goo.gl/fu7stz.

  26. https://www.lifeprint.com/asl101/index/sign-language-phrases.htm.

  27. Prior to translating the phrases, we preprocessed the ASL reference glosses and the ASL direct translation sentences, replacing exclamation marks, question marks, and periods with [EXCLAMATION], [INTERROGATION], and [DOT], respectively. We made these substitutions to avoid distortions in the automatic metrics, because our MT strategy generates sentences with this representation.
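The substitution described above can be sketched as follows; the helper and the example sentence are hypothetical, but the bracketed tokens are the ones named in the note.

```python
# Map sentence-final punctuation to the bracketed tokens the MT strategy
# emits, so automatic metrics are not distorted by a mere difference in
# punctuation representation.
SUBS = {"!": " [EXCLAMATION]", "?": " [INTERROGATION]", ".": " [DOT]"}

def normalize(sentence: str) -> str:
    for mark, token in SUBS.items():
        sentence = sentence.replace(mark, token)
    return " ".join(sentence.split())  # collapse doubled whitespace

print(normalize("YOU DEAF YOU?"))  # → YOU DEAF YOU [INTERROGATION]
```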

  28. http://mango.blender.org/.

  29. http://durian.blender.org/.

  30. Because some Deaf people have difficulty reading and writing in the spoken language of their country, it was necessary to adapt the forms to ASL so that difficulty in understanding the questionnaire would not influence the evaluation.

References

  • Araújo TMU (2012) Uma solução para geração automática de trilhas em Língua Brasileira de Sinais em conteúdos multimídia (A solution for automatic generation of Brazilian Sign Language tracks in multimedia contents), PhD Thesis, Universidade Federal do Rio Grande do Norte, Brazil

  • Araújo TMU, Ferreira FLS, Silva DANS, Oliveira LD, Falcão EL, Domingues LA, Martins VF, Portela IAC, Nóbrega YS, Lima HRG, Souza Filho GL, Tavares TA, Duarte AN (2014) An approach to generate and embed sign language video tracks into multimedia contents. Inf Sci 281:762–780


  • Corada (2016) Sign 4 Me: a Signed English translator app. http://www.corada.com/products/sign-4-me-app. Accessed 30 Nov 2016

  • Freitas C, Rocha P, Bick E (2008) Floresta sintá(c)tica: bigger, thicker and easier. In: Texeira A, de Lima VLS, Oliveira LCO, Quaresma P (eds) Computational processing of the Portuguese language. Lecture notes in computer science. Springer, Aveiro, pp 216–219


  • HETAH (2016) Fundación HETAH - Herramientas tecnológicas para ayuda humanitaria (HETAH foundation: technological tools for humanitarian aid). http://hetah.net/es. Accessed 29 Nov 2016

  • Huenerfauth M (2004) A multi-path architecture for machine translation of English text into American Sign Language animation. In: Proceedings of the student research workshop at HLT-NAACL, Boston, MA, pp 25–30

  • Huenerfauth M (2005a) American sign language generation: multimodal NLG with multiple linguistic channels. In: Proceedings of the ACL student research workshop. Ann Arbor, MI, pp 37–42

  • Huenerfauth M (2005b) Representing coordination and non-coordination in an American Sign Language animation. In: Proceedings of the 7th international ACM SIGACCESS conference on computers and accessibility, vol 7, Baltimore, MD, pp 44–51

  • Huenerfauth M (2008) Generating American Sign Language animation: overcoming misconceptions and technical challenges. Univers Access Inf Soc 6:419–434


  • Huenerfauth M, Zhao M, Gu E, Allbeck J (2007) Evaluating American Sign Language generation through the participation of native ASL signers. In: Proceedings of the 9th international ACM SIGACCESS conference on computers and accessibility: assets, vol 7, pp 211–218

  • Lima MACB (2015) Tradução Automática com Adequação Sintático-Semântica para LIBRAS (Machine translation with syntactic-semantic adequacy for LIBRAS), Master Thesis, Universidade Federal da Paraíba, Brazil

  • Lima MACB, Araújo TMU, Oliveira ES (2015) Incorporation of syntactic-semantic aspects in a LIBRAS machine translation service to multimedia platforms. In: Proceedings of the 21st Brazilian symposium on multimedia and the web, Webmedia 2015, Manaus, Brazil, pp 133–140

  • Liu CH, Silva CC, Wang L, Way A (2018) Pivot machine translation using Chinese as pivot language. In: CWMT 2018: proceedings of the 14th China workshop on machine translation, Wuyishan, China, pp 1–12

  • López-Ludeña V, San-Segundo R, Martín R, Sánchez D, Garcia A (2011) Evaluating a speech communication system for deaf people. IEEE Lat Am Trans 9:565–570


  • López-Ludeña V, San-Segundo R, Morcillo CG, López JC, Pardo Muñoz JM (2013) Increasing adaptability of a speech into sign language translation system. Expert Syst Appl 40:1312–1322


  • López-Ludeña V, González-Morcillo C, López JC, Ferreiro E, Ferreiros J, San-Segundo R (2014a) Methodology for developing an advanced communications system for the deaf in a new domain. Knowl Based Syst 52:240–252


  • López-Ludeña V, González-Morcillo C, López JC, Ferreiro E, Ferreiros J, San-Segundo R (2014b) Translating bus information into sign language for deaf people. Eng Appl Artif Intell 32:258–269


  • Morrissey S, Way A (2005) An example-based approach to translating sign language. In: Proceedings of the second workshop on example-based machine translation, Phuket, Thailand, pp 109–116

  • Morrissey S, Way A (2013) Manual labour: tackling machine translation for sign languages. Mach Trans 27(1):25–64


  • Niessen S, Och F-J, Leusch G, Ney H (2000) An evaluation tool for machine translation: fast evaluation for machine translation research. In: Proceedings of the second international conference on language resources and evaluation (LREC), Athens, Greece, pp 39–45

  • Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th conference of the association for computational linguistics, Philadelphia, PA, pp 311–318

  • Shoaib U, Ahmad N, Prinetto P, Tiotto G (2014) Integrating MultiWordNet with Italian sign language lexical resources. Expert Syst Appl 41:2300–2308


  • Stein D, Dreuw P, Ney H, Morrissey S, Way A (2007) Hand in hand: automatic sign language to speech translation. In: Proceedings of TMI 2007, Skövde, pp 214–220

  • Stokoe WC Jr (2005) Sign language structure: an outline of the visual communication systems of the American deaf. J Deaf Stud Deaf Educ 10:3–37


  • Stumpf MR (2000) Língua de Sinais: escrita dos surdos na Internet (Sign language: deaf writing on the Internet). In: Proceedings of the V Congresso Ibero-Americano de Informática na Educação (RIBIE), Viña del Mar

  • Su HY, Wu CH (2009) Improving structural statistical machine translation for sign language with small corpus using thematic role templates as translation memory. Trans Audio Speech Lang Proc 17:1305–1315


  • Tschare G (2016) The sign language avatar project. Innovative practice 2016. http://goo.gl/5RCkAc. Accessed 30 Nov 2016

  • van Zijl L, Barker D (2003) South African sign language machine translation system. In: Proceedings of the 2nd international conference on computer graphics, virtual reality, visualisation and interaction in Africa, Cape Town, pp 49–52

  • van Zijl L, Combrink A (2006) The South African sign language machine translation project: issues on non-manual sign generation. In: Proceedings of the 2006 annual research conference of the South African institute of computer scientists and information technologists on IT research in developing countries, Gordon’s Bay, pp 127–134

  • van Zijl L, Olivrin G (2008) South African sign language assistive translation. In: Proceedings of the IASTED international conference on telehealth/assistive technologies, Baltimore, pp 3–7

  • Wauters LN (2005) Reading comprehension in deaf children: the impact of the mode of acquisition of word meanings. EAC, Research Centre on Atypical Communication. Radboud University, Nijmegen


  • WHO (2017) Deafness and hearing loss, fact sheet. World Health Organization. http://www.who.int/mediacentre/factsheets/fs300/en. Accessed 01 Dec 2017


Acknowledgements

We would like to thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior of Brazil (CAPES) for financial support.

Author information


Corresponding author

Correspondence to Tiago M. U. de Araújo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Costa, R.E.O., de Araújo, T.M.U., Lima, M.A.C.B. et al. Towards an open platform for machine translation of spoken languages into sign languages. Machine Translation 33, 315–348 (2019). https://doi.org/10.1007/s10590-019-09238-5

