
RETRACTED ARTICLE: Machine learning based sign language recognition: a review and its research frontier

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing

This article was retracted on 14 July 2022

This article has been updated

Abstract

In the recent past, research on automatic sign language recognition using machine learning methods has demonstrated remarkable success and made momentous progress. This article investigates the impact of machine learning on the state-of-the-art literature in sign language recognition and classification. It highlights the issues faced by present recognition systems, for which the research frontier in sign language recognition seeks solutions. Around 240 approaches to sign language recognition, covering multilingual signs, are compared, and the most important research articles among them are discussed. Based on the inferences drawn from these approaches, the article discusses how machine learning methods can benefit automatic sign language recognition and the gaps that machine learning approaches must still address to achieve real-time sign language recognition.
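As context for the comparison, the short Python sketch below illustrates the generic feature-vector-plus-classifier pipeline that many of the surveyed systems follow; it is a minimal illustration, not the reviewed article's method. The 42-dimensional hand-landmark features, the 24 sign classes, and the randomly generated data are placeholder assumptions standing in for a real feature extractor and dataset.

# Illustrative sketch only: static sign classification from fixed-length feature vectors.
# Placeholder assumptions: 42 features (e.g., 21 2-D hand keypoints), 24 sign classes,
# and synthetic data in place of real video/depth/glove-derived features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 1200, 42, 24
X = rng.normal(size=(n_samples, n_features))      # placeholder feature vectors
y = rng.integers(0, n_classes, size=n_samples)    # placeholder sign labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the features, then train a multi-class SVM, one family of classifiers
# that recurs throughout the surveyed literature.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))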




Acknowledgements

I wish to express my gratitude to the Science & Engineering Research Board, Department of Science & Technology, Government of India, for sanctioning this project under the Start-up Research Grant program (SRG/2019/001338) and for their support.

Author information


Corresponding author

Correspondence to R. Elakkiya.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1007/s12652-022-04314-w

About this article


Cite this article

Elakkiya, R. RETRACTED ARTICLE: Machine learning based sign language recognition: a review and its research frontier. J Ambient Intell Human Comput 12, 7205–7224 (2021). https://doi.org/10.1007/s12652-020-02396-y


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s12652-020-02396-y

