Sign Language Recognition


Submitted By

Sr. No.  Name of Student                    Roll No.
1        Harshvardhan Hanmantrao Deshmukh   1218
2        Deepak Tatyaso Patil               1215
3        Darshan Dattatray Gaikwad          1357
4        Mahesh Suresh Chaudhari            1728
Contents:

• Introduction
• Literature Review
• Relevance of the Work
• Proposed Work
• Proposed Methodology
• HW/SW Requirement
• Architecture/Flowchart/Algorithm
• References
Introduction to Problem Statement

Sign language is the most expressive mode of communication for hearing-impaired people, with information conveyed mainly through hand and arm gestures. Sign Language Recognition (SLR) plays a predominant role in developing gesture-based human-computer interaction systems. Recent years have witnessed growing research interest in interaction and intelligent computing, and computers have become a key element of how our society communicates. Because hand signs constitute a powerful modality of human communication, they offer an intuitive and convenient channel between hearing people and people with hearing and speech impairments. This motivates our goal of breaking the communication barrier between deaf and hearing people.

Signs fall broadly into two classes: static signs and dynamic signs. Dynamic signs typically involve movement of body parts, and a sign may also carry emotion, depending on the meaning the gesture conveys.
Literature Review

• Sign Language Recognition (SLR) systems, which are required to recognize sign languages, have been studied widely for years. Prior work varies in input sensors, gesture segmentation, feature extraction, and classification methods. This review analyses and compares the methods employed in SLR systems, in particular the classification methods used, and suggests the most promising directions for future research. Owing to recent advances in classification, many recently proposed works contribute mainly on the classification side, for example hybrid methods and Deep Learning. Based on our review, HMM-based approaches, including their modifications, have been explored extensively in prior research, while hybrid CNN-HMM and fully Deep Learning approaches have shown promising results and offer opportunities for further exploration.


Relevance of the Work

1. Empowering Deaf and Hard of Hearing Individuals

One of the most critical reasons for the relevance of sign language recognition is its potential to
empower Deaf and Hard of Hearing individuals. Access to effective communication is a fundamental
human right, and sign language is the primary mode of communication for many in this community. Sign
language recognition technology bridges the communication gap between the Deaf and Hard of Hearing
and the hearing world. It allows them to interact with others, access education, job opportunities,
healthcare, and various services more easily.

2. Inclusive Education

In the realm of education, sign language recognition plays a pivotal role in ensuring inclusive learning
environments. Deaf and Hard of Hearing students can face significant barriers to education when their
teachers and peers do not understand sign language. With sign language recognition systems, these
students can participate fully in classrooms, enhancing their learning experience and opportunities for
academic success.
Relevance of the Work

3. Enhanced Employment Opportunities

Sign language recognition also has the potential to open doors to better employment opportunities for
Deaf and Hard of Hearing individuals. Many jobs require effective communication skills, and without
access to sign language recognition, these individuals might face limitations. By using this technology,
they can compete on an equal footing in the job market, contributing their skills and talents to various
industries.

4. Bridging the Communication Gap

Communication is essential for everyone, and sign language recognition technology not only benefits the
Deaf and Hard of Hearing community but also society at large. It helps bridge the communication gap
between individuals with different communication needs. This inclusivity fosters diversity and
understanding, creating a more equitable and compassionate society.
Proposed Methodology

 Creating the dataset for sign language detection:

Suitable datasets are available on the internet, but in this project we create the dataset ourselves. A live feed is taken from the webcam, and every frame in which a hand is detected inside the ROI (region of interest) is saved to a directory (here, the gesture directory) containing two folders, train and test, each of which holds 10 class folders of images captured using create_gesture_data.py.

 Training CNN:

First, we load the data using Keras's ImageDataGenerator, whose flow_from_directory function loads the train and test sets; the name of each numbered folder becomes the class name for the images it contains.
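This loading and training step can be sketched as follows. The 64×64 grayscale input size, batch size, and the small CNN architecture are illustrative assumptions, not the project's exact network; only the gesture/train and gesture/test layout and the use of flow_from_directory come from the description above.

```python
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)   # assumed size of the thresholded ROI frames
NUM_CLASSES = 10      # one class per gesture folder

# A small illustrative CNN for the thresholded gesture images.
model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

def make_generators(data_dir="gesture"):
    """flow_from_directory infers the class names from the folder
    names inside train/ and test/, as noted above."""
    datagen = ImageDataGenerator(rescale=1.0 / 255)
    train = datagen.flow_from_directory(
        os.path.join(data_dir, "train"), target_size=IMG_SIZE,
        color_mode="grayscale", class_mode="categorical", batch_size=32)
    test = datagen.flow_from_directory(
        os.path.join(data_dir, "test"), target_size=IMG_SIZE,
        color_mode="grayscale", class_mode="categorical", batch_size=32)
    return train, test
```

With the dataset in place, training would be: train, test = make_generators(); model.fit(train, validation_data=test, epochs=10).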

 Predict the gesture:

Here we create a bounding box for the ROI and compute the accumulated average, as in dataset creation, to identify any foreground object. Once a gesture is detected, the thresholded ROI is treated as a test image for the trained CNN.
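The accumulated-average and thresholding logic can be sketched in NumPy; in the actual OpenCV pipeline these steps correspond to cv2.accumulateWeighted, cv2.absdiff, and cv2.threshold. The weight and threshold values below are illustrative assumptions.

```python
import numpy as np

def update_background(bg, frame, weight=0.5):
    """Running (accumulated) average of the ROI; the analogue of
    cv2.accumulateWeighted."""
    if bg is None:
        return frame.astype("float64")
    return weight * frame + (1.0 - weight) * bg

def segment_hand(bg, frame, threshold=25):
    """Difference the frame against the background model and
    threshold it; the binary mask is what the CNN classifies."""
    diff = np.abs(frame.astype("float64") - bg)
    return (diff > threshold).astype("uint8") * 255

# Illustrative use on synthetic frames: calibrate on an empty ROI,
# then a bright "hand" region appears inside it.
bg = None
empty = np.zeros((64, 64), dtype="uint8")
for _ in range(30):                 # background calibration frames
    bg = update_background(bg, empty)
hand_frame = empty.copy()
hand_frame[20:40, 20:40] = 200      # simulated hand in the ROI
mask = segment_hand(bg, hand_frame)
```

In the live system the same mask, resized to the CNN's input shape, is passed to model.predict to obtain the gesture class and its confidence.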
HW/SW Requirement
 Software Requirements:
•Operating System: Windows, Mac, Linux

•SDK: OpenCV, TensorFlow, Keras, Numpy

•Language: Python

 Hardware Requirements:
•The Hardware Interfaces Required are:

•Camera: Good quality, 3 MP

•RAM: Minimum 8 GB or higher

•GPU: 4 GB

•Processor: Intel Pentium 4 or higher

•HDD: 256 GB or higher

•Monitor: 15-inch or 17-inch colour monitor

•Mouse: Scroll or optical mouse, or touchpad

•Keyboard: Standard 110-key keyboard


Block Diagram

[Block diagram: Input Frame → Sign Detection / Localization → Processed Sign Frame → Sign Recognition (trained on the Image Dataset) → Confidence of Proposed Recognition]
Illustrative Prototype
References

•[1] R. Brill, The Conference of Educational Administrators Serving the Deaf: A History. Washington, DC: Gallaudet University Press, 1986.

•[2] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, doi: 10.1109/5.726791.

•[3] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

•[4] https://data-flair.training/blogs/sign-language-recognition-python-ml-opencv/