Abstract:
Deaf people mainly use sign language to exchange information with others. It is a language in which people communicate through hand gestures when they cannot speak or hear. Sign language recognition (SLR) covers the recognition of these hand gestures and continues through the generation of text or speech for the recognized gestures. Hand gestures in sign language are classified as static or dynamic; static hand gestures are simpler to recognize than dynamic ones, but recognizing both is important to the human community. Deep learning can be used to recognize hand gestures by building a neural network that learns to identify them over training epochs. When the model successfully recognizes a gesture, the corresponding English text is generated and converted to speech. Such a model makes communication more efficient for deaf people and helps them communicate more easily. This paper proposes a system that uses a MediaPipe model to recognize sign language from hand gestures and actions. The system uses a long short-term memory (LSTM) network, a model well suited to this recognition task because of its fast processing speed and high accuracy. The system achieves an accuracy of up to 97%. The results show that the system is feasible when applied in practice.
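The pipeline the abstract outlines (MediaPipe keypoint extraction feeding an LSTM classifier, followed by text/speech output) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of MediaPipe Holistic, the 30-frame sequence length, the layer widths, and the 10-class output are all hypothetical values for the sketch.

```python
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumption: MediaPipe Holistic supplies pose and hand landmarks per frame.
mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten one frame's landmarks into a fixed-length feature vector."""
    # Pose: 33 landmarks x (x, y, z, visibility) = 132 values.
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    # Each hand: 21 landmarks x (x, y, z) = 63 values; zeros if not detected.
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])  # 258 features per frame

# Hypothetical dimensions: 30 frames per gesture clip, 10 gesture classes.
NUM_FRAMES, NUM_FEATURES, NUM_CLASSES = 30, 258, 10

# Stacked LSTM classifier over the keypoint sequence.
model = Sequential([
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(NUM_FRAMES, NUM_FEATURES)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(NUM_CLASSES, activation='softmax'),  # one probability per gesture
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```

In a full system of this kind, the predicted class label would be mapped to its English word and passed to a text-to-speech engine, matching the abstract's gesture-to-text-to-speech flow.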
Date of Conference: 23-25 December 2023
Date Added to IEEE Xplore: 22 March 2024