Abstract
Tailored support is crucial for deaf and hearing-impaired children to overcome learning difficulties, particularly during primary education. The absence of hearing profoundly hinders the learning journey, since hearing plays a pivotal role in language acquisition. Assistive technology is one way to address this issue in education. This paper introduces RSA, an interactive system for recognizing and simulating the letters of Arabic Sign Language. The system aims to enrich language learning in an engaging manner: it uses artificial intelligence to recognize the gestures corresponding to Arabic letters in real time, and it can replicate these letters with a robotic arm. Thanks to its simplicity, the system holds promise for improving the acquisition of Arabic Sign Language skills by deaf children.
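To make the recognition-and-simulation pipeline described above concrete, the following is a minimal, hypothetical sketch of how such a loop could be wired together: webcam frames are classified by a pretrained convolutional network and the predicted letter is forwarded to a serial-connected robotic arm. The model file name, serial port, input size, and confidence threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: real-time letter recognition from a webcam with a
# pretrained Keras classifier, forwarding the predicted letter index to a
# robotic arm over a serial link. All names and parameters are assumptions.
import cv2
import numpy as np
import serial
import tensorflow as tf

MODEL_PATH = "arabic_letters.h5"   # assumed pretrained classifier file
ARM_PORT = "/dev/ttyACM0"          # assumed serial port of the robotic arm
IMG_SIZE = 224                     # assumed input resolution of the model

model = tf.keras.models.load_model(MODEL_PATH)
arm = serial.Serial(ARM_PORT, baudrate=9600, timeout=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to the classifier's expected input.
    img = cv2.resize(frame, (IMG_SIZE, IMG_SIZE)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    letter_idx = int(np.argmax(probs))
    if probs[letter_idx] > 0.8:  # act only on confident predictions
        # The arm firmware is assumed to map the index to a hand pose.
        arm.write(f"{letter_idx}\n".encode("ascii"))
    cv2.imshow("RSA demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
arm.close()
cv2.destroyAllWindows()
```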
Availability of data and materials
The source code can be obtained from the corresponding author upon request.
Acknowledgements
The authors express their gratitude to FAB LAB (FABrication LABoratory) at the Higher School of Applied Sciences of Tlemcen, Algeria, for providing the necessary materials used in the experiments.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
The conception or design of the work, data analysis, and interpretation were collaborative efforts involving all authors. Imane Nedjar wrote the first draft of the manuscript, and all authors approved the final version.
Corresponding author
Ethics declarations
Competing Interests
The authors declare no conflict of interest.
Ethical Statement
Not applicable.
Informed Consent
Not applicable.
Statement Regarding Research Involving Human Participants and/or Animals
Not applicable.
Consent to Participate
Not applicable.
Consent to Publish
All coauthors have consented to publication.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Nedjar, I., M’hamedi, M. Interactive system based on artificial intelligence and robotic arm to enhance Arabic sign language learning in deaf children. Educ Inf Technol 29, 24563–24580 (2024). https://doi.org/10.1007/s10639-024-12826-5