Si-Lang Translator With Image Processing
ISSN No:-2456-2165
Abstract:- People with hearing and speaking disabilities have difficulty communicating with others, which creates a gap between them. To bridge this gap, they use special gestures to express their thoughts, each with its own meaning; together, these gestures form a "sign language". Sign language is very important for deaf and mute people because it is their primary means of communication, both among themselves and with hearing people. In this application, we present a simple framework for sign language recognition. Our system detects predefined signs from hand gestures using only basic hardware, such as a camera and its interfacing. It is a comprehensive, user-friendly application built on convolutional neural networks. Hand gestures are recognized in three main steps. First, a dataset is created by capturing images, which are preprocessed by resizing, masking, and converting RGB to grayscale. Second, the system is trained on this dataset using a convolutional neural network. Finally, the trained classifier recognizes a given sign, and the recognized sign is displayed. We expect that this method may serve as an accessible introduction to automated gesture and sign language recognition for technophiles and may support future work in these fields.

Keywords:- Convolutional Neural Network; Preprocessing; Sign Language; ReLU.

I. INTRODUCTION

Around 20-45% of people worldwide suffer from hearing and speaking disabilities, and about eight per 20,000 of the global population become deaf and dumb before learning any language. This leads them to adopt their own country's sign language as their foremost means of communication. According to recent data from the World Federation of the Deaf, there are over 70 million sign language users in the world and over 300 sign languages across the globe. Contrary to common belief, not all sign language users can read and understand written text the way non-disabled people do, owing to differences in sentence syntax and grammar.

Sign language is a means of visual communication in which expressions, hand gestures, and body movements carry the message. It is significant for people who have difficulty with hearing or speech. Sign language recognition refers to the transformation of gestures into the words or alphabets of the locally spoken language. Transforming sign language into words or alphabets in this way can help close the gap between impaired people and the rest of the world.

A. PROBLEM DOMAIN

Existing systems face many problems. ASL alphabet recognition is challenging because of the difficulty of hand segmentation and the variation in appearance among signers. Color-based systems also suffer from challenges such as complex backgrounds, hand segmentation, and large inter-class and intra-class variations. All of these mechanisms share a practical limitation: they require costly extra hardware to acquire data for sign recognition.

Existing dynamic sign language recognition methods still have drawbacks: difficulty recognizing complex hand gestures, low recognition accuracy for most dynamic signs, and potential problems in training on large video-sequence data. Static sign language recognition struggles with the complexity and large vocabulary of hand actions, so it may misinterpret significant variations among signers. Dynamic sign language recognition must also separate the fine-grained finger motion of a sign from the large-scale body background. Another difficulty is extracting the most discriminative features from images or videos. Besides, choosing an appropriate classifier is a critical factor for producing accurate recognition results.

Over time, a variety of products capable of converting signs to text have appeared in the marketplace. For example, a wearable glove can translate Indian Sign Language, but such a device must recognize both the static and the dynamic gestures of ISL. While standard neural networks can classify static gestures, they cannot be used for classifying dynamic gestures: in a dynamic gesture, the reading at each time point depends on the previous readings, yielding sequential data, and since standard neural networks require each reading to be independent of the others, they cannot classify sequential data. Under the exact context of symbolic
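The preprocessing described in the abstract — capturing frames, converting RGB to grayscale, resizing, and masking — can be sketched as follows. This is a minimal NumPy illustration, not the system's actual code; a real pipeline would typically use a library such as OpenCV, and the image sizes and mask threshold here are illustrative assumptions:

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an HxWx3 RGB image to grayscale via luminosity weights."""
    return img @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D grayscale image."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def mask_hand(gray, threshold=50):
    """Keep pixels brighter than the threshold; zero out the background."""
    return np.where(gray > threshold, gray, 0)

# Toy 4x4 RGB frame standing in for a captured camera image.
frame = np.random.randint(0, 256, size=(4, 4, 3)).astype(float)
gray = rgb_to_gray(frame)           # 4x4 grayscale image
small = resize_nearest(gray, 2, 2)  # downsampled to 2x2
masked = mask_hand(small)           # background suppressed
print(masked.shape)                 # prints (2, 2)
```

Each preprocessed, masked grayscale image would then be saved as one sample in the sign dataset.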
Fig. 2. Data created for the sign “food”
Fig. 3. Recognition of the sign “food”
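The recognition step shown in Fig. 3 feeds a preprocessed frame through the trained convolutional network. As a minimal sketch of the forward pass such a classifier performs — convolution, ReLU activation (one of the keywords above), flattening, and a softmax over sign classes — consider the following NumPy code; the random, untrained weights, layer sizes, and class labels are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a square kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

signs = ["food", "hello", "thanks"]           # illustrative class labels
img = rng.random((8, 8))                      # stand-in preprocessed frame
kernel = rng.standard_normal((3, 3))          # one untrained 3x3 filter
features = relu(conv2d(img, kernel)).ravel()  # conv -> ReLU -> flatten
W = rng.standard_normal((len(signs), features.size))
probs = softmax(W @ features)                 # class probabilities
print(signs[int(np.argmax(probs))])
```

In the real system, the convolution filters and output weights would come from training the CNN on the captured dataset rather than from a random generator, and the class with the highest probability would be displayed as the recognized sign.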