SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS

Authors

  • Dr. P. Veeresh, Gowlla Ramireddy, Kunigiri Hemanth, K. Sathyanarayana, Golla Manohar

Keywords:

Sign Language Recognition (SLR); Convolutional Neural Network (CNN); spatio-temporal features; Microsoft Kinect

Abstract

Sign Language Recognition (SLR) aims to translate sign language into written or spoken language, enhancing communication between people who are deaf or mute and those who are not. The task has significant societal value but remains highly demanding owing to the intricate and widely varying nature of hand movements. Existing SLR approaches rely on manually designed features to characterize sign language motion and build classification models on top of those features; designing reliable features that accommodate the full range of hand movements is difficult. To address this issue, we propose a convolutional neural network (CNN) that automatically extracts discriminative spatio-temporal features from raw video streams, without any prior knowledge or hand-crafted feature engineering. To further improve performance, the CNN takes multiple video streams as input (color information, depth cues, and body-joint positions), thereby integrating color, depth, and trajectory information. We evaluate the proposed model on a real dataset collected with Microsoft Kinect and show that it outperforms conventional methods based on hand-crafted features.
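The multi-stream idea described above (separate color, depth, and body-joint streams whose features are fused before classification) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' architecture: time-averaged feature pooling stands in for the spatio-temporal features a real 3D CNN would learn, and all function names and shapes are illustrative assumptions.

```python
# Minimal sketch of multi-stream late fusion (hypothetical; not the paper's
# exact network). Each input stream -- color, depth, body joints -- is reduced
# to one feature vector; the vectors are concatenated and scored linearly.

def pool_features(frames):
    """Average each feature dimension over time. This is a stand-in for the
    spatio-temporal features a 3D CNN would extract from a video stream."""
    n = len(frames)
    dims = len(frames[0])
    return [sum(f[d] for f in frames) / n for d in range(dims)]

def fuse_streams(streams):
    """Concatenate per-stream pooled features into one fused descriptor,
    integrating color, depth, and trajectory information."""
    fused = []
    for frames in streams:
        fused.extend(pool_features(frames))
    return fused

def classify(fused, weights, biases):
    """Linear scores for each sign class; the argmax is the predicted sign."""
    scores = [sum(w * x for w, x in zip(ws, fused)) + b
              for ws, b in zip(weights, biases)]
    return max(range(len(scores)), key=scores.__getitem__)
```

In the full model, each stream would pass through its own convolutional layers before fusion, so the network can learn modality-specific features; the sketch only preserves the fuse-then-classify structure.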
