A novel approach to recognising sign language using deep learning techniques
International Journal of Development Research
Received 18th January, 2023; Received in revised form 09th February, 2023; Accepted 21st February, 2023; Published online 30th March, 2023
Copyright © 2023, B. S. Panda et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A system has been developed to assist communication between sign language users and non-signers, specifically for Indian Sign Language (ISL). It uses computer vision and deep learning techniques to translate ISL into text: signs are captured with a camera, each sign is matched to its meaning using trained data, and the result is converted into text with TensorFlow. OpenCV and MediaPipe extract keypoints from real-time video frames for specific sign language actions; the data are then preprocessed by normalizing features to zero mean, selecting relevant features with MediaPipe, and encoding the labels with one-hot encoding. Keypoints from different body parts are concatenated into a single feature vector containing position and visibility information. An LSTM model is built and trained on the preprocessed data in TensorFlow with the Adam optimizer and categorical cross-entropy loss; its hyperparameters are tuned and regularizers are applied to improve performance. The system has demonstrated high accuracy in recognizing ISL and converting it into text, and has potential applications in industries such as healthcare, gaming, and education.