Hand Gestures Classification and Image Processing Using Convolution Neural Network Algorithm
Abstract:- The deaf community communicates primarily through sign language. In general, sign language is a highly expressive medium of communication, which helps to advance and broaden a conversation. ASL is often regarded as the universal sign language, although there are numerous variations and other sign systems in use in various parts of the world. Sign language assigns a comparatively small set of principal ideas and appearances to its signs. The main goal of this effort is to create a sign language recognition system that will benefit the deaf community and speed up communication. The project's main objective is to build a classifier-based software model for sign language recognition: gestures are detected, principal component analysis is used for gesture representation, and a classifier assesses the extracted gesture features. Hand gestures have been used as a form of communication since the beginning of time, and recognition of hand gestures makes human-computer interaction (HCI) more versatile and convenient. Because of this, accurate character identification is crucial for smooth and error-free HCI. According to a literature review, the majority of hand gesture recognition (HGR) systems now in use consider only a few simple, easily discriminated motions. This study performs robust modelling of static signs in the context of sign language recognition using deep convolutional neural networks (CNNs). The CNN-based HGR considers both the ASL alphabet and digits simultaneously, and the CNNs utilised for HGR are discussed along with their benefits and drawbacks. Modified AlexNet and modified VGG16 models for classification form the foundation of the CNN architecture: after feature extraction with the modified pre-trained VGG16 and AlexNet models, a multiclass support vector machine (SVM) classifier is built. To achieve the highest recognition performance, the results are assessed using features taken from various layers. Both leave-one-subject-out and random 70-30 cross-validation were used to test the accuracy of the HGR schemes. This work also examines how easily each character can be recognised and how similar the corresponding motions are to one another. To show how affordable this work is, the experiments are run on a basic CPU machine rather than cutting-edge GPU hardware. The proposed system outperformed several cutting-edge techniques with a recognition accuracy of 99.82%.

Keywords:- Sign Language, ASL (American Sign Language), Deaf Community, Gestures, Human-Computer Interaction, Hand Gesture Recognition, CNN (Convolutional Neural Network), SVM (Support Vector Machine)

I. INTRODUCTION

The standard computer input devices developed over the years have not altered much, largely because these devices work well. Even so, as computers become more prevalent in our daily lives, it is becoming ever simpler to introduce new hardware and software. At present we can interact with computers only through keyboards, light pens and, these days, keypads. Although these devices are very popular, they have speed limitations. As technology advances, vision-based interfaces that give computers the ability to see are becoming more common among users. This evolution will prompt the creation of new device interfaces that let devices process commands which cannot be entered through their present input mechanisms. Man-machine interaction, often known as human-computer interaction (HCI), is the term used to describe the interaction between humans and machines. When creating an HCI model, it is important to keep in mind two key traits: functionality and usability.

The functionality of a system is the set of services it supplies to its users, whereas its usability is how accurately the user can accomplish specific activities with it. Gestures, including sign language, are also used for inter-human communication. Recognition of hand signals has recently attracted a lot of attention; applications of hand gesture detection include operating machines and replacing the mouse in video games. The sets of motions that make up sign language are its most important structures, and every gesture has a distinct meaning in sign language.
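To make the pipeline described in the abstract more concrete, the following is a minimal sketch, assuming TensorFlow/Keras and scikit-learn, of deep feature extraction with a pre-trained VGG16 followed by a multiclass SVM. The layer choice (`fc1`), the 224x224 input size, and the `load_asl_images` helper are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact model): a pre-trained VGG16 is used
# as a fixed feature extractor, and a multiclass SVM is trained on top,
# mirroring the feature-extraction + SVM pipeline described in the abstract.
# `load_asl_images` is a hypothetical helper standing in for the real dataset.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.svm import SVC

# VGG16 with its ImageNet weights; features are read from the 'fc1' layer
# (one common choice; the paper assesses features from several layers).
base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3), RGB order."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# Hypothetical loader returning ASL letter/digit images and labels.
X_train, y_train, X_test, y_test = load_asl_images()

# One-vs-rest linear SVM over the deep features.
svm = SVC(kernel="linear", decision_function_shape="ovr")
svm.fit(extract_features(X_train), y_train)
print("test accuracy:", svm.score(extract_features(X_test), y_test))
```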
The objective of sign language recognition (SLR) is to create sophisticated machine learning algorithms that reliably categorise human articulations into individual signs or continuous phrases. Currently, the accuracy and generalisation capabilities of SLR algorithms are restricted by the absence of large annotated datasets and by the difficulty of recognising sign boundaries in continuous SLR scenarios.

Fig 2 Features of Python Language

Machine Learning
Basic and advanced machine learning principles are covered in this section; both students and working professionals can benefit from them. The overall learning workflow is shown in Fig 3.
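The abstract evaluates the HGR schemes with both a random 70-30 split and leave-one-subject-out cross-validation, the latter directly probing the generalisation concern raised above. Below is a minimal scikit-learn sketch of both protocols; the feature matrix `X`, labels `y`, and per-sample signer IDs `groups` are assumed inputs rather than artifacts of this paper.

```python
# Hedged sketch of the two evaluation protocols named in the abstract:
# (1) a random, stratified 70-30 train/test split, and
# (2) leave-one-subject-out cross-validation over signer IDs.
# X (features), y (sign labels) and groups (signer IDs) are assumed inputs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split
from sklearn.svm import SVC

def random_70_30_accuracy(X, y, seed=0):
    """Accuracy under a random 70-30 split, stratified by class."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, random_state=seed, stratify=y)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

def leave_one_subject_out_accuracy(X, y, groups):
    """Mean accuracy when each signer is held out in turn."""
    scores = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups):
        clf = SVC(kernel="linear").fit(X[tr], y[tr])
        scores.append(clf.score(X[te], y[te]))
    return float(np.mean(scores))
```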
Functional Specifications:
Fig 5 System Architecture
Future Scope:
We intend to add more alphabets to our datasets in order to expand the model's recognition of alphabetic signs while maintaining high accuracy. We would also like to improve the system by incorporating speech recognition so that it can assist blind people as well. More than 70 million deaf people worldwide use sign language to communicate; through it they can learn, take up jobs, access resources, and participate in their communities.
VII. CONCLUSION