
Recognition of Indian Sign Language (ISL) Using Deep Learning Model

Published in Wireless Personal Communications.

Abstract

An efficient sign language recognition system (SLRS) can recognize the gestures of a sign language to ease communication between the signer and non-signer communities. In this paper, a computer-vision-based SLRS using a deep learning technique is proposed. The study makes three primary contributions. First, a large Indian Sign Language (ISL) dataset has been created from 65 different users in an uncontrolled environment. Second, the intra-class variance of the dataset has been increased through augmentation to improve the generalization ability of the proposed work: three additional copies of each training image are generated using three different affine transformations. Third, a novel and robust Convolutional Neural Network (CNN) model has been proposed for the feature extraction and classification of ISL gestures. The performance of this method is evaluated on the self-collected ISL dataset and on a publicly available ASL dataset; across the three datasets used in total, the achieved accuracies are 92.43%, 88.01%, and 99.52%. The efficiency of the method has also been evaluated in terms of precision, recall, F-score, and the time consumed by the system. The results indicate that the proposed method shows encouraging performance compared with existing work.
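The augmentation step described above, generating three extra copies of each training image via affine transformations, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`affine_warp`, `augment_three`) and the specific transform parameters (10° rotation, a (5, 3)-pixel translation, 1.1× scaling) are assumptions for demonstration, since the paper's abstract does not specify them.

```python
import numpy as np

def affine_warp(img, M):
    """Apply a 2x3 affine matrix M to a grayscale image.

    Uses inverse mapping with nearest-neighbor sampling so every
    output pixel is looked up in the source image; pixels that map
    outside the source are left black.
    """
    h, w = img.shape
    # Invert the 3x3 homogeneous form of M, keep the top 2 rows.
    Minv = np.linalg.inv(np.vstack([M, [0.0, 0.0, 1.0]]))[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = (Minv @ coords).round().astype(int)   # source coordinates
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out

def augment_three(img):
    """Return three augmented copies of img (rotate, translate, scale)."""
    h, w = img.shape
    cx, cy = w / 2.0, h / 2.0
    th = np.deg2rad(10)
    # Rotation by 10 degrees about the image centre.
    rot = np.array([
        [np.cos(th), -np.sin(th), cx - cx * np.cos(th) + cy * np.sin(th)],
        [np.sin(th),  np.cos(th), cy - cx * np.sin(th) - cy * np.cos(th)],
    ])
    # Small translation by (5, 3) pixels.
    shift = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 3.0]])
    # Mild 1.1x zoom about the image centre.
    s = 1.1
    scale = np.array([[s, 0.0, cx * (1 - s)], [0.0, s, cy * (1 - s)]])
    return [affine_warp(img, M) for M in (rot, shift, scale)]
```

In practice a training pipeline would apply `augment_three` to every image in the training split, quadrupling its size while leaving the labels unchanged, which is what increases the intra-class variance the abstract refers to.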

(Figures 1–14 appear in the full article.)


Availability of Data and Material

The authors declare that no data or material was obtained illegally; publicly available datasets were used for implementation. The dataset generated in this study is available from the corresponding author on reasonable request, subject to copyright.


Funding

The authors declare that no funding was received for this research work.

Author information

Corresponding author: Sakshi Sharma.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sharma, S., Singh, S. Recognition of Indian Sign Language (ISL) Using Deep Learning Model. Wireless Pers Commun 123, 671–692 (2022). https://doi.org/10.1007/s11277-021-09152-1

