A Review of Factors That Impact the Design of Glove-Based Wearable Devices


IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 12, No. 2, June 2023, pp. 522~531


ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i2.pp522-531

A review of factors that impact the design of glove-based wearable devices

Soly Mathew Biju, Obada Al Khatib, Hashir Zahid Sheikh


Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, UOWD Building, Dubai, UAE

Article history: Received Jul 14, 2021; Revised Oct 21, 2022; Accepted Nov 11, 2022

Keywords: Glove; Pattern recognition; Gesture recognition; Sign language; Sensor

ABSTRACT

Loss of the ability to speak or hear has psychological and social effects on the affected individuals because appropriate interaction is missing. Sign language is used by such individuals to communicate with each other. This paper reports details of various aspects of wearable healthcare technologies designed in recent years: the aim of each study, the types of technologies used, the accuracy of the system designed, data collection and storage methods, the technology used to accomplish the task, and the limitations and future research suggested by each study. The aim of this review is to compare the differences between the papers. The technologies used are also compared, largely on the basis of accuracy, to determine which wearable devices perform better. The reported limitations and future research directions help identify how these wearable devices can be improved. A systematic review was performed based on a search of the literature, and a total of 23 articles were retrieved. The articles study and design various wearable devices, mainly glove-based devices, to help interpret sign language.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Soly Mathew Biju
Faculty of Engineering and Information Sciences, University of Wollongong in Dubai
Dubai Blocks 5, 14 & 15, Dubai Knowledge Park, P.O. Box 20183, Dubai, UAE
Email: [email protected]

1. INTRODUCTION
Living in the modern era of a computerized world, where everything is straightforward and simple, a portion of this world is losing out on the advantages this age has to offer [1]. The capacity to talk, the power of speech, is something we often underestimate. It is one of the most effective ways of sharing thoughts and feelings, and it facilitates communication with other individuals. Deafness is characterized as a level of hearing loss such that an individual cannot comprehend speech even when the sound is loud. According to the World Health Organization (WHO), around 466 million individuals have hearing loss, 34 million of whom are children. The WHO additionally estimates that more than 900 million individuals will have this disability by 2050. There are several causes of hearing loss: hereditary problems, certain infectious illnesses, complications at birth, chronic ear infections, the use of specific drugs, exposure to extreme noise, and ageing. This communication barrier adversely affects the lives and social connections of deaf individuals [2].
Human gestures are an effective and powerful method of communication. They are used to express a person's emotions. A sign language is a language that uses manual gestures, rather than speech, to convey information. Nonetheless, there are interaction barriers between hearing individuals and deaf people, because deaf people may be unable to talk and hear, or hearing people may be unable to sign to express themselves. This communication disparity can have an adverse effect on the lives of deaf people. There are two customary methods of communication between deaf people and hearing people who do not comprehend sign language: through interpreters or through text.
Technology has reduced this mismatch through systems that convert sign language into speech. These systems can be broadly classified into two types: glove-based systems and vision-based systems. In a glove-based system, the signs an individual makes while conversing are transferred to a personal computer (PC). The real-time sign is matched against a data set that contains all the signs initially added to the system. Once the correct sign is matched, the information is passed to a text-to-sound converter, where it is converted from sign to sound. A vision-based system, on the other hand, uses a camera to recognize the gestures made by the hand and the body; however, it requires a lot of processing on the image, such as color segmentation, image filtering, and boundary detection.
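To make the glove-based pipeline above concrete, the following Python sketch matches a live flex-sensor reading against a small stored gesture set by nearest-match comparison. The database contents, sensor value ranges, distance threshold, and function names are invented for illustration and are not taken from any of the reviewed papers.

```python
import numpy as np

# Hypothetical gesture database: one stored flex-sensor reading per sign.
# Real systems keep many samples per sign; all values here are made up.
GESTURE_DB = {
    "A": np.array([890.0, 120.0, 110.0, 105.0, 100.0]),
    "B": np.array([150.0, 870.0, 860.0, 855.0, 840.0]),
}

def recognise(reading, max_distance=80.0):
    """Match one live 5-value flex reading against the stored signs.

    Returns the closest sign, or None when nothing is close enough
    (the case where the real-time sign is not in the data set)."""
    best_label, best_dist = None, float("inf")
    for label, template in GESTURE_DB.items():
        dist = float(np.linalg.norm(reading - template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None

sign = recognise(np.array([885.0, 118.0, 112.0, 108.0, 99.0]))
if sign is not None:
    print(sign)  # a complete glove system would hand this label to a text-to-speech step
```

A complete system would populate the data set during an enrolment phase and forward the recognized label to the text-to-sound converter mentioned above.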
This study reviews sign language gloves through a comprehensive assessment and evaluation based on a synthesis of sensor glove papers. The review highlights the important achievements, sensor positioning, accuracy, limitations, methodology used, and time taken to process the input, and stresses the challenges and prospects for this developing area of research.

2. METHOD
This study offers a methodical analysis of the literature centered on smart gloves for mute individuals. Several evolving ideas have been considered to enhance insight into advances in the knowledge on this crucial issue. The study addresses this gap in the literature through a comprehensive assessment and evaluation based on a synthesis of published sensor glove research. The following is a synopsis of the key themes emerging from this literature assessment:
− Sensors used
− Sensor positioning
− Sign language
− Accuracy and efficiency of the system
− Time taken to process
− Limitations
− Methodology used
This paper categorized relevant articles using a keyword search approach. Various keywords were identified and explored on IEEE Xplore. The main aim of this research is to understand the current state of research on sensor gloves that can be used to interpret sign language. After searching IEEE Xplore, 23 articles were found that met the criteria for this analysis. All the articles were comprehensively reviewed by the authors to discover common factors, and these factors were compared to find the differences between the articles. The key aim of each article was also identified while processing the articles.
The results section is divided into three parts: aim of articles, sensors, and feature comparison. In aim of articles, the major goal of the papers is discussed. In sensors, a comparison is made between the sensors used in these gloves and the impact they have on the results. Finally, in feature comparison, an analysis is made of the differences between papers that achieve the same outcome.

3. RESULTS AND DISCUSSION


Several of the articles reviewed by the authors share a similar main goal: to design a glove that can translate sign language into text. The majority of articles use flex sensors and accelerometers to identify gestures. However, some of them use different types of gloves, such as touch sensor gloves and surface electromyography (sEMG) gloves. The gestures are then converted either to text or to speech. To convert the gestures into meaningful data, the articles have used various approaches such as machine learning, databases, and different algorithms.
A good number of articles used a microcontroller to process the sensor data by dividing the sensor values into ranges for each gesture [3], [4]. However, this methodology can cause errors when the sensor values fall outside the expected ranges [1], [5], [6], [7]. A machine learning algorithm was used to determine the gesture in some of the articles [2], [8], while some used a lookup table to determine the gestures [9]–[12]. Articles [13] and [14] used a data segmentation method, in which a threshold-based method is used to extract 21 features for each data segment. As the classification model, the authors used a multivariate Gaussian distribution with a diagonal covariance matrix. A multi-objective Bayesian framework for feature selection (BFFS), with discriminability power and fault tolerance maximization as its two objectives, is implemented to improve the recognition accuracy and reduce the model complexity by selecting a subset of the 21 features. Natesh et al. [10] designed a glove for eight different sign languages: Australian sign language (AUSLAN), British sign language (BSL), New Zealand sign language (NZSL), Indian sign language (ISL), deaf blind sign language (DBSL), Czech sign language (CSL), British, Australian and New Zealand sign language (BANZSL), and standard manual sign language (SMSL).
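To make the classification step described for [13], [14] concrete, the sketch below implements a per-class Gaussian model with diagonal covariance in Python/NumPy. It assumes the 21 per-segment features have already been extracted and does not reproduce the BFFS selection step; the variance floor and the API shape are choices made for this illustration.

```python
import numpy as np

class DiagonalGaussianClassifier:
    """Per-class multivariate Gaussian with diagonal covariance.

    Sketch of the classification model described for [13], [14]; the 21
    per-segment features are assumed to be extracted beforehand."""

    def fit(self, X, y):
        # X: (n_segments, n_features) feature matrix, y: gesture labels
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.means_, self.vars_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.means_[c] = Xc.mean(axis=0)
            # small floor keeps every per-feature variance strictly positive
            self.vars_[c] = Xc.var(axis=0) + 1e-6
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        scores = []
        for c in self.classes_:
            mu, var = self.means_[c], self.vars_[c]
            # log-likelihood of each segment under the class model N(mu, diag(var))
            ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
            scores.append(ll)
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]
```

Training amounts to calling `DiagonalGaussianClassifier().fit(features_train, labels_train)` on the per-segment feature matrix, after which `predict` returns the most likely gesture for each new segment.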
The authors in [15] and [16] used sEMG to obtain the data, which was processed for segmentation. After segmentation, features were extracted; this was done for both the inertial sensors and the static sensors. The two feature sets were then cascaded and the best subset of features was selected. After determining the subset, it was passed through a classification model to recognize the gesture.
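The cascading step can be sketched as follows, assuming per-segment feature matrices for the two sensor streams are already available. The matrix sizes, the toy labels, and the use of a simple univariate filter in place of the multi-objective Bayesian selection used in [15] are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Placeholder per-segment feature matrices for the two sensor streams;
# the shapes and random data are not values from [15] or [16].
rng = np.random.default_rng(0)
X_semg = rng.random((200, 12))       # sEMG features per segment
X_inertial = rng.random((200, 9))    # inertial (accelerometer/gyroscope) features
y = rng.integers(0, 10, size=200)    # 10 toy gesture classes for this example

# "Cascading" the features: column-wise concatenation of both feature sets.
X = np.hstack([X_semg, X_inertial])

# Stand-in for the feature-subset selection step; a univariate filter is
# used here purely for illustration in place of the paper's BFFS method.
selector = SelectKBest(f_classif, k=10)
X_subset = selector.fit_transform(X, y)
print(X_subset.shape)   # (200, 10): the reduced feature set passed to the classifier
```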
In [17] and [18], features were extracted from the sensors and passed through a support vector machine (SVM) based classifier to recognize the gesture made. In [19] and [20], a gesture recognition algorithm is devised which first checks whether the contact sensors indicate a matching gesture. If the gesture is not found, the code looks for any resemblance in the flex sensor readings. These two stages can determine almost all the gestures except the dynamic ones; to determine the dynamic gestures, the inertial sensor is used. In [21] and [22], no dynamic gestures were recognized, as the glove could only recognize static gestures. To process the gesture, three layers of nodes were used. The first layer had 7 nodes, which were the values from the seven sensors. These 7 nodes were converted into 52 nodes by applying weights to them. The final layer had 26 nodes, each node corresponding to a letter of the alphabet. The authors in [23]–[32] stored the sensor values in a file which was loaded into a LabVIEW program. This program receives the values from the file and matches the data with gestures close to American sign language (ASL) gestures. In articles [33]–[37], the gestures for each letter were processed by storing the voltage levels of the sensors in a database. To determine a gesture, the program compares the sensor values to the ones in the database.
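The layer sizes reported for [21], [22] (7 sensor inputs, 52 weighted nodes, 26 letter outputs) can be sketched as a simple forward pass. The weights, the tanh activation, and the example input below are placeholders, since the original papers do not publish their trained parameters.

```python
import numpy as np

# 7 raw sensor values -> 52 weighted nodes -> 26 outputs (one per letter).
# Random placeholder weights; the original system learns these from examples.
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(52, 7))
W2 = rng.normal(scale=0.1, size=(26, 52))

def classify(sensor_values):
    """Forward pass of the three-layer recogniser; returns a letter A-Z."""
    x = np.asarray(sensor_values, dtype=float)   # layer 1: the 7 raw sensor readings
    hidden = np.tanh(W1 @ x)                     # layer 2: 52 weighted nodes (activation assumed)
    scores = W2 @ hidden                         # layer 3: one score per letter
    return chr(ord("A") + int(np.argmax(scores)))

print(classify([512, 300, 280, 610, 150, 90, 45]))   # placeholder sensor reading
```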
In [38] and [39], a learning mode is used to train the system: the system stores values from the glove in a comma-separated values (CSV) file 20 times, and these readings are used to evaluate the range of each gesture for that specific user. Whenever the user starts communicating using the glove, the program takes the sensor values and looks up the gesture made by the user in the CSV file. The authors in [40] and [41] used an SVM-based classifier. The sensor values for each gesture are used to train the SVM classifier, and the resulting model is used to predict the gesture made. In [42]–[44], an online gesture recognition model is proposed which consists of two parts: threshold-based segmentation and classification with a probabilistic model. The system reads the sensor values and uses the thresholds to determine which gesture it is; the probabilistic model is then used to validate the gesture. A very inefficient method was used in [35], [45] and [46], where the gestures were recorded manually: the user has to make the gesture and select the alphabet. This solution will not detect the gesture if there is a slight difference in the sensor readings. In [47]–[50], the sensor values were obtained and passed through multiple algorithms to evaluate the accuracy; the glove was able to achieve higher accuracy than previous experiments. Figure 1 shows the overall design for sign language recognition for most systems.
The articles presented in this review are summarized in Table 1 (APPENDIX).
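As an illustration of the SVM-based approach in [40], [41], the sketch below trains a classifier on per-gesture sensor vectors and evaluates it on held-out readings. The synthetic data, the feature layout (five flex values plus three accelerometer axes), and the kernel parameters are assumptions for the example rather than details from the papers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a recorded data set: each row is one glove reading
# (5 flex values + 3 accelerometer axes) and each label one of 26 signs.
# A real system would use readings captured from the glove instead.
rng = np.random.default_rng(0)
labels = rng.integers(0, 26, size=520)
centres = rng.uniform(100, 900, size=(26, 8))         # one nominal posture per sign
readings = centres[labels] + rng.normal(scale=15.0, size=(520, 8))

X_train, X_test, y_train, y_test = train_test_split(
    readings, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")        # kernel and parameters are illustrative
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At run time the glove would stream one reading at a time:
# predicted_sign = clf.predict(new_reading.reshape(1, -1))[0]
```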

Figure 1. Block diagram of common sign language recognition system

4. CONCLUSION
This review centered on the characteristics of glove-based wearable devices; it uncovered numerous concepts and how they are utilized. The range and complexity of the articles highlight the algorithms and sensors used to develop these gloves. Most of the gloves did not have an accelerometer to capture dynamic gestures and used a different methodology instead; however, this methodology of using flex sensors to identify dynamic gestures can produce ambiguous results. Some of the gloves could not process data in real time, which is a real drawback, as the application is to talk in real time. Some gloves required training before use, which is a problem when the glove is constantly switched between multiple users. Various articles did not give adequate information about their tests, test plans, and the accuracy of the system. Certain gloves had a delay in reading the gestures, so the hand has to stay still for a short period before moving to the next gesture to allow the system to interpret it, while in some cases a delay was introduced to process the data. This is very inconvenient, as the communication speed would be quite slow. In terms of accuracy, half of the papers reviewed achieved an accuracy of 90% or more, while six of them had accuracy between 70% and 90%. Some papers reported multiple accuracies for different algorithms or designs. One of the major factors in achieving better accuracy was the use of either contact sensors or an accelerometer to capture dynamic gestures. The other factor that influenced accuracy was the algorithm used: the higher-accuracy systems were usually the ones using machine learning or algorithms such as k-nearest neighbors (k-NN), segmentation, and gesture recognition, while the lower-accuracy ones used raw sensor values to determine the gesture. Another factor that could impact accuracy is hand size. If the hand is large, the flex sensors bend properly; proper bending of the flex sensors creates a wider range of values, which increases accuracy. Moreover, some of the papers were not able to determine the accuracy or did not mention it. Future improvements can be made on this topic by designing the glove based on the user to yield better results, or by comparing different hand sizes to quantify the level of inaccuracy; this could help determine whether a glove will work for other users or not. The use of different sensors could be another improvement, which could help in making a more flexible glove.

APPENDIX

Table 1. Feature comparison (one entry per reviewed paper; fields are sign language, sensor positioning/components used, accuracy/efficiency, method used, time taken to process the input, limitations, and future study proposed)

Paper [1]
- Sign language: ASL, 26 alphabets
- Sensors/components: Four flex sensors
- Accuracy/efficiency: Not mentioned
- Method: To develop a cheap sensory data glove to help disabled people communicate
- Time to process: Not mentioned
- Limitations: Accuracy limited by the size of the hand
- Future study: Making the glove wireless; addition of an accelerometer; a speaker to listen to the converted gestures

Paper [2]
- Sign language: ASL, 26 alphabets and 15 words
- Sensors/components: Two flex sensors are used for the thumb and pinky, and a pair of sensors for each other finger
- Accuracy/efficiency: 95%
- Method: A machine learning algorithm
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Increase the size of the database; integrate the glove with other devices at home

Paper [5]
- Sign language: ASL, 26 alphabets
- Sensors/components: The flex sensors and the accelerometer sewed onto a white cotton glove
- Accuracy/efficiency: 74%
- Method: Five flex sensors, a GY61 accelerometer, Arduino Mega 2560, micro-SD card reader module, liquid crystal display (LCD); a gesture recognition algorithm
- Time to process: 0.74 seconds to translate the gesture into text and speech
- Limitations: Soldering defect on the thumb's flex sensor
- Future study: Wireless gloves

Paper [9]
- Sign language: American sign language (ASL) and Pakistan sign language (PKL)
- Sensors/components: A leather glove with 11 flex sensors, one for each finger (5) and one for each abduction (4); two extra sensors are used for measuring the roll and pitch of the wrist
- Accuracy/efficiency: 90%
- Method: Lookup table and template matching along with statistical pattern recognition
- Time to process: Not mentioned
- Limitations: Only static gestures
- Future study: Improve the accuracy of the system

Paper [13]
- Sign language: ASL, 26 alphabets, full stop, space, and resting (fully stretched fingers); 29 gestures in total
- Sensors/components: Five flex sensors connected in parallel with five contact sensors and an accelerometer; there are also seven fabric electrical contacts, two positives placed on the index finger and thumb, and five negatives placed on the lower, top, and front of the middle finger, index finger, and pinky finger
- Accuracy/efficiency: 76.1% for the multivariate Gaussian distribution; 77.9% with the multi-objective Bayesian framework for feature selection (BFFS)
- Method: Data segmentation is based on a threshold-based method; a multivariate Gaussian distribution with diagonal covariance matrix is used for classification; the multi-objective Bayesian framework for feature selection (BFFS) is implemented to reduce complexity
- Time to process: Not mentioned
- Limitations: Ambiguity in gestures leading to errors in determining the letter
- Future study: Extra contact sensors to remove ambiguity

Paper [33]
- Sign language: ASL, 26 alphabets
- Sensors/components: Flex and contact sensors along the length of the fingers
- Accuracy/efficiency: 91.54%
- Method: Uses the k-nearest neighbors (k-NN) algorithm to identify the alphabets; data segmentation is used to identify a change in gesture; a database is used to store the various alphabets
- Time to process: 500 ms
- Limitations: Only identifies the 26 alphabets
- Future study: Update the system to identify words and sentences; reduce the time taken to identify an alphabet to less than 500 ms

Paper [15]
- Sign language: ASL, 80 sign gestures
- Sensors/components: Four major muscle groups are chosen to place four-channel sEMG electrodes: 1) extensor digitorum, 2) flexor carpi radialis longus, 3) extensor carpi radialis longus, and 4) extensor carpi ulnaris; the inertial measurement unit (IMU) sensor is worn on the wrist, where a smart watch is usually placed
- Accuracy/efficiency: 96%
- Method: The data collected from three sessions of the same subject are put together and a 10-fold cross validation is done on the data collected from each subject separately
- Time to process: A quiet period of 2-3 s between gestures is required
- Limitations: A large number of gestures may be difficult to predict using the method suggested in this paper; a wearable inertial sensor and sEMG-based sign language recognition (SLR) system does not capture facial expression
- Future study: The paper suggests that, to recognize continuous sentences, a different segmentation technique or other possible models should be considered

Paper [3]
- Sign language: ASL, 26 alphabets and the numbers 0-9
- Sensors/components: Uses 8 independent capacitive touch sensors: 5 placed at the fingertips and 3 placed between the index, middle, ring, and pinky fingers
- Accuracy/efficiency: 92%
- Method: Uses Python code and RPi firmware; there was a countdown of 3 s between two gestures
- Time to process: Recognizes alphabets within 0.7 s
- Limitations: Not mentioned
- Future study: Not mentioned

Paper [10]
- Sign language: All eight sign languages: AUSLAN, BSL, BANZSL, NZSL, ISL, DBSL, CSL, SMSL
- Sensors/components: The glove consists of 9 flex and 8 contact sensors, with contact sensors placed in appropriate positions on the fingertips and flex sensors on the outer region of the fingers
- Accuracy/efficiency: The system efficiency was found to be 80.06% in identification mode (IM) and was enhanced to 93.16% in enhanced identification mode (EIM)
- Method: The two subsystems are interconnected through a pair of TI CC2541 Bluetooth low energy (BLE) modules
- Time to process: Gesture recognition takes 41.1 milliseconds in IM and 151.5 milliseconds in EIM; it is thus capable of recognizing 24 gestures per second in IM and 6 gestures per second in EIM
- Limitations: Not mentioned
- Future study: Not mentioned

Paper [17]
- Sign language: ASL, 26 alphabets
- Sensors/components: Five flex sensors along the length of the outer surface of each finger; in the second version, two additional pressure sensors were placed on the left side of the first joint of the middle finger
- Accuracy/efficiency: An accuracy rate of 65.7% can be achieved without pressure sensors, and 98.2% accuracy with pressure sensors on the middle finger
- Method: The proposed system utilizes five flex sensors, two pressure sensors, and a three-axis inertial motion sensor
- Time to process: 10 s for every letter, with a 3 s gap between two signs
- Limitations: Not mentioned
- Future study: Future work proposes the design of a smaller printed circuit board, the inclusion of words and sentences at the sign language level, and instantly audible voice output

Paper [19]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Flex sensors are connected on the fingers (dorsal side of the hand); contact sensors are connected at multiple places depending on the gestures; the inertial sensor is connected on the tip of the ring finger
- Accuracy/efficiency: The system gave an accuracy of 92% on trained ASL testers and 81% on amateur testers
- Method: A gesture recognition algorithm is used: first it checks the contact sensors to see if there is any equivalent gesture, then it compares the flex sensor readings to refine the gestures, and finally it uses the inertial sensor to finalize the gesture
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Not mentioned

Paper [21]
- Sign language: American sign language (ASL), 24 alphabets plus two punctuation symbols
- Sensors/components: A seven-sensor glove is used: five sensors are placed on the fingers and thumb, one sensor measures the tilt, and the last one measures the rotation
- Accuracy/efficiency: 88%
- Method: A three-layer algorithm is used: the first layer passes the raw sensor values to the second layer; the second layer has 52 nodes, applies weights to the input, and passes it to the third layer; the third layer consists of 26 nodes, each node denoting one alphabet
- Time to process: Sampling at 4 times per second; 0.75 s required to determine the letter
- Limitations: Does not handle dynamic gestures or gestures with two hands
- Future study: Use of a camera to determine signs using facial expressions; use of a speech engine to speak the text from the gestures; extra sensors to determine body language to aid in sign determination

Paper [23]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Eighteen sensors are used on the glove: two resistive bend sensors on each finger, four abduction sensors, and sensors measuring thumb crossover, palm arch, wrist flexion, and abduction
- Accuracy/efficiency: 90%
- Method: A LabVIEW program collects data and saves it to a file; this data is analyzed and used to train a neural network, while another program uses the data from the glove to analyze the ASL sign and, after determining the sign, plays it
- Time to process: Cannot process in real time
- Limitations: Cannot process in real time
- Future study: Development of a wearable glove that recognizes and translates sign language to spoken English, and translates spoken English to sign language

Paper [34]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Six flex sensors placed on the fingers, thumb, and wrist, and three contact sensors placed on the forefinger, middle finger, and thumb
- Accuracy/efficiency: 83%
- Method: Every gesture has different voltage levels; the system compares the voltage levels of the gesture made with the glove and identifies the alphabet
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Not mentioned

Paper [38]
- Sign language: American sign language (ASL) and Arabic sign language (ArSL)
- Sensors/components: Five flex sensors are placed on the fingers and thumb; an MPU6050 inertial sensor is also connected
- Accuracy/efficiency: Static = 95%; dynamic = 88%
- Method: The data is collected from the glove and processed using an Arduino, then output using a graphical user interface (GUI) program made with Python 3
- Time to process: 1000 ms
- Limitations: Mismatches in words with similar gestures
- Future study: Use of contact sensors on the fingertips; using a left-hand glove; increasing the size of the glove; making the system multilingual

Paper [24]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Five flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: Not mentioned
- Method: The sensors give their raw values to a data acquisition (DAQ) card and a LabVIEW program; the program converts the letter to a binary code, which is then used to translate letters and words, and the program then converts it to text and audio
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Replace the flex sensors to increase efficiency; make the glove wireless; make the program standalone to make it portable

Paper [25]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Five flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: 95%
- Method: The system determines the alphabets from the flex sensors, then uses the accelerometer and contact sensors if the gesture is not found
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Add more sensors such as a gyroscope; enhance speech synthesis; wireless communication

Paper [26]
- Sign language: Indian sign language
- Sensors/components: Five flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: Limited by hand size
- Method: The sensors give their raw values to an Arduino; the Arduino recognizes the gesture, outputs it to an LCD screen, and sends it to a Bluetooth module; a smartphone can be connected to the Bluetooth module to receive the recognized gesture
- Time to process: Not mentioned
- Limitations: The accuracy of the system is limited by hand size; smaller hands can be more accurate due to a larger bend of the sensor; does not handle dynamic gestures
- Future study: Add more sensors to recognize the full sign language

Paper [27]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Five flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: Not mentioned
- Method: The sensors give their raw values to an Arduino Nano, which transmits the data through a transmitter to an Arduino Mega; the Arduino Mega converts the signals into a gesture and displays it on an LCD screen; a Raspberry Pi is used to play the sound of the recognized sign
- Time to process: Not mentioned
- Limitations: Does not handle dynamic gestures
- Future study: Increase the scope by adding another glove

Paper [40]
- Sign language: American sign language (ASL) and Indian sign language (ISL)
- Sensors/components: Five flex sensors on the fingers and thumb and an MPU6050 sensor
- Accuracy/efficiency: ASL = 98.91%; ISL = 100%
- Method: The data from the sensors is processed by feeding it into an SVM-based classifier; after determining the gesture, the Arduino sends the data through Bluetooth, and it is played on a speaker after processing
- Time to process: 2 s
- Limitations: Does not cover all the gestures
- Future study: Increase the number of sensors; add an additional glove

Paper [42]
- Sign language: American sign language (ASL), 26 alphabets plus two punctuation symbols
- Sensors/components: Flex sensors are connected on the fingers and thumb, while the inertial sensor is connected on the back of the hand
- Accuracy/efficiency: 73.6%
- Method: An online gesture recognition model is used which consists of two parts, segmentation and classification; segmentation is used to determine the gesture made and classification is done to form the sentences; the recognized gestures are sent to the speaker after the sentence finishes
- Time to process: 0.5 s
- Limitations: Fewer features, which disrupts the accuracy
- Future study: Compare each detected word with an extensive vocabulary set to increase efficiency

Paper [35]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: 86%
- Method: The system has two modes, teach and learn; in teach mode the user can store the gestures made, while in learn mode the user can check whether the gesture made is correct or not
- Time to process: 3 s
- Limitations: Not mentioned
- Future study: Use of additional sensors and a camera to aid detection; use of a speech engine to play the translated text

Paper [47]
- Sign language: American sign language (ASL), 26 alphabets
- Sensors/components: Flex sensors are placed on the fingers and thumb
- Accuracy/efficiency: 98%
- Method: The data from the sensors is processed by comparing it to the stored voltage levels to determine the gesture made; after recognizing the gesture, it is played on the MP3 player module
- Time to process: Not mentioned
- Limitations: Not mentioned
- Future study: Not mentioned

REFERENCES
[1] G. Kumar, M. K. Gurjar, and S. B. Singh, “American sign language translating glove using flex sensor,” Imperial journal of
interdisciplinary research, no. 6, pp. 1439–1441, 2016.
[2] S. Bin Rizwan, M. S. Z. Khan, and M. Imran, “American sign language translation via smart wearable glove technology,” RAEE
2019-International Symposium on Recent Advances in Electrical Engineering, 2019, doi: 10.1109/RAEE.2019.8886931.
[3] K. S. Abhishek, L. C. F. Qubeley, and D. Ho, “Glove-based hand gesture recognition sign language translator using capacitive
touch sensor,” 2016 IEEE International Conference on Electron Devices and Solid-State Circuits, EDSSC 2016, pp. 334–337,
2016, doi: 10.1109/EDSSC.2016.7785276.
[4] M. Elmahgiubi, M. Ennajar, N. Drawil, and M. S. Elbuni, “Sign language translator and gesture recognition,” GSCIT 2015-Global
Summit on Computer and Information Technology-Proceedings, 2015, doi: 10.1109/GSCIT.2015.7353332.
[5] R. Ambar, C. K. Fai, M. H. Abd Wahab, M. M. Abdul Jamil, and A. A. Ma’Radzi, “Development of a wearable device for sign
language recognition,” Journal of Physics: Conference Series, vol. 1019, no. 1, 2018, doi: 10.1088/1742-6596/1019/1/012017.
[6] K. N. Tarchanidis and J. N. Lygouras, “Data glove with a force sensor,” IEEE Transactions on Instrumentation and
Measurement, vol. 52, no. 3, pp. 984–989, 2003, doi: 10.1109/TIM.2003.809484.
[7] J. Wang and Z. Ting, “An ARM-based embedded gesture recognition system using a data glove,” 26th Chinese Control and
Decision Conference, CCDC 2014, pp. 1580–1584, 2014, doi: 10.1109/CCDC.2014.6852419.
[8] R. H. Liang and M. Ouhyoung, “A real-time continuous gesture recognition system for sign language,” Proceedings-3rd IEEE
International Conference on Automatic Face and Gesture Recognition, FG 1998, pp. 558–567, 1998, doi:
10.1109/AFGR.1998.671007.
[9] Y. Khambaty et al., “Cost effective portable system for sign language gesture recognition,” 2008 IEEE International Conference
on System of Systems Engineering, SoSE 2008, 2008, doi: 10.1109/SYSOSE.2008.4724149.
[10] A. Natesh, G. Rajan, B. Thiagarajan, and V. Vijayaraghavan, “Low-cost wireless intelligent two hand gesture recognition
system,” 11th Annual IEEE International Systems Conference, SysCon 2017-Proceedings, 2017, doi:
10.1109/SYSCON.2017.7934745.
[11] J. L. Hernandez-Rebollar, R. W. Lindeman, and N. Kyriakopoulos, “A multi-class pattern recognition system for practical finger
spelling translation,” Proceedings-4th IEEE International Conference on Multimodal Interfaces, ICMI 2002, pp. 185–190, 2002,
doi: 10.1109/ICMI.2002.1166990.
[12] J. Wu, W. Gao, Y. Song, W. Liu, and B. Pang, “A Simple sign language recognition system based on data glove,” International
Conference on Signal Processing Proceedings, ICSP, vol. 2, pp. 1257–1260, 1998, doi: 10.1109/icosp.1998.770847.
[13] N. Tanyawiwat and S. Thiemjarus, “Design of an assistive communication glove using combined sensory channels,”
Proceedings-BSN 2012: 9th International Workshop on Wearable and Implantable Body Sensor Networks, pp. 34–39, 2012, doi:
10.1109/BSN.2012.17.
[14] C. Oz and M. C. Leu, “Recognition of finger spelling of american sign language with artificial neural network using
position/orientation sensors and data glove,” Lecture Notes in Computer Science, vol. 3497, no. II, pp. 157–164, 2005, doi:
10.1007/11427445_25.


[15] J. Wu, L. Sun, and R. Jafari, “A wearable system for recognizing American sign language in real-time using IMU and surface
EMG sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 5, pp. 1281–1290, 2016, doi:
10.1109/JBHI.2016.2598302.
[16] C. Oz and M. C. Leu, “American sign language word recognition with a sensory glove using artificial neural networks,”
Engineering Applications of Artificial Intelligence, vol. 24, no. 7, pp. 1204–1213, 2011, doi: 10.1016/j.engappai.2011.06.015.
[17] B. G. Lee and S. M. Lee, “Smart wearable hand device for sign language interpretation system with sensors fusion,” IEEE
Sensors Journal, vol. 18, no. 3, pp. 1224–1232, 2018, doi: 10.1109/JSEN.2017.2779466.
[18] N. Sriram and M. Nithiyanandham, “A hand gesture recognition based communication system for silent speakers,” 2013
International Conference on Human Computer Interactions, ICHCI 2013, 2013, doi: 10.1109/ICHCI-IEEE.2013.6887815.
[19] S. S. Ahmed, H. Gokul, P. Suresh, and V. Vijayaraghavan, “Low-cost wearable gesture recognition system with minimal user
calibration for asl,” Proceedings-2019 IEEE International Congress on Cybermatics: 12th IEEE International Conference on
Internet of Things, 15th IEEE International Conference on Green Computing and Communications, 12th IEEE International
Conference on Cyber, Physical and So, pp. 1080–1087, 2019, doi: 10.1109/iThings/GreenCom/CPSCom/SmartData.2019.00185.
[20] H. Kaur and J. Rani, “A review: Study of various techniques of Hand gesture recognition,” 1st IEEE International Conference on
Power Electronics, Intelligent Control and Energy Systems, ICPEICES 2016, 2017, doi: 10.1109/ICPEICES.2016.7853514.
[21] S. A. Mehdi and Y. N. Khan, “Sign language recognition using sensor gloves,” ICONIP 2002-Proceedings of the 9th
International Conference on Neural Information Processing: Computational Intelligence for the E-Age, vol. 5, pp. 2204–2206,
2002, doi: 10.1109/ICONIP.2002.1201884.
[22] M. I. Saleem, P. Otero, S. Noor, and R. Aftab, “Full duplex smart system for deaf dumb and normal people,” 2020 Global
Conference on Wireless and Optical Technologies, GCWOT 2020, 2020, doi: 10.1109/GCWOT49901.2020.9391593.
[23] J. M. Allen, P. K. Asselin, and R. Foulds, “American sign language finger spelling recognition system,” Proceedings of the IEEE
Annual Northeast Bioengineering Conference, NEBEC, pp. 285–286, 2003, doi: 10.1109/nebc.2003.1216106.
[24] S. Kumuda and P. K. Mane, “Smart assistant for deaf and dumb using flexible resistive sensor: Implemented on LabVIEW
platform,” Proceedings of the 5th International Conference on Inventive Computation Technologies, ICICT 2020, pp. 994–1000,
2020, doi: 10.1109/ICICT48043.2020.9112553.
[25] A. Arif, S. T. H. Rizvi, I. Jawaid, M. A. Waleed, and M. R. Shakeel, “Techno-talk: An American sign language (ASL) translator,”
International Conference on Control, Decision and Information Technologies, CoDIT 2016, pp. 665–670, 2016, doi:
10.1109/CoDIT.2016.7593642.
[26] K. S. and V. M. H. Joshi, S. Bhati, “Detection of finger motion using flex sensor for assisting speech impaired,” IJIRSET, 2017,
doi: 10.15680/IJIRSET.2017.0610172.
[27] G. Sabaresh and A. Karthi, “Design and implementation of a sign-to-speech/text system for deaf and dumb people,” IEEE
International Conference on Power, Control, Signals and Instrumentation Engineering, ICPCSI 2017, pp. 1840–1844, 2018, doi:
10.1109/ICPCSI.2017.8392033.
[28] J. M. Rehg and T. Kanade, “Digiteyes: vision-based hand tracking for human-computer interaction,” Motion of Non-Rigid and
Articulated Objects Workshop, Proceedings, pp. 16–22, 1994, doi: 10.1109/mnrao.1994.346260.
[29] S. S. Patil, “Sign language interpreter for deaf and dumb people,” International Journal for Research in Applied Science and
Engineering Technology, vol. 7, no. 9, pp. 354–359, 2019, doi: 10.22214/ijraset.2019.9050.
[30] L. K. Simone, E. Elovic, U. Kalambur, and D. Kamper, “A low cost method to measure finger flexion in individuals with reduced
hand and finger range of motion,” Annual International Conference of the IEEE Engineering in Medicine and Biology-
Proceedings, vol. 26 VII, pp. 4791–4794, 2004, doi: 10.1109/iembs.2004.1404326.
[31] R. Gentner and J. Classen, “Development and evaluation of a low-cost sensor glove for assessment of human finger movements in
neurophysiological settings,” Journal of Neuroscience Methods, vol. 178, no. 1, pp. 138–147, 2009, doi:
10.1016/j.jneumeth.2008.11.005.
[32] H. Brashear, V. Henderson, K. H. Park, H. Hamilton, S. Lee, and T. Starner, “American sign language recognition in game
development for deaf children,” Eighth International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS
2006, vol. 2006, pp. 79–86, 2006, doi: 10.1145/1168987.1169002.
[33] V. Pathak, S. Mongia, and G. Chitranshi, “A framework for hand gesture recognition based on fusion of flex, contact and
accelerometer sensor,” Proceedings of 2015 3rd International Conference on Image Information Processing, ICIIP 2015, pp.
312–319, 2016, doi: 10.1109/ICIIP.2015.7414787.
[34] D. Bajpai, U. Porov, G. Srivastav, and N. Sachan, “Two way wireless data communication and American sign language translator
glove for images text and speech display on mobile phone,” Proceedings-2015 5th International Conference on Communication
Systems and Network Technologies, CSNT 2015, pp. 578–585, 2015, doi: 10.1109/CSNT.2015.121.
[35] K. Kadam, R. Ganu, A. Bhosekar, and S. D. Joshi, “American sign language interpreter,” Proceedings - 2012 IEEE 4th
International Conference on Technology for Education, T4E 2012, pp. 157–159, 2012, doi: 10.1109/T4E.2012.45.
[36] Z. Lu, X. Chen, Q. Li, X. Zhang, and P. Zhou, “A hand gesture recognition framework and wearable gesture-based interaction
prototype for mobile devices,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 2, pp. 293–299, 2014, doi:
10.1109/THMS.2014.2302794.
[37] R. McKee and J. Napier, “Interpreting into international sign pidgin,” Sign Language & Linguistics, vol. 5, no. 1, pp. 27–54,
2002, doi: 10.1075/sll.5.1.04mck.
[38] S. A. E. El-Din and M. A. A. El-Ghany, “Sign language interpreter system: An alternative system for machine learning,” 2nd
Novel Intelligent and Leading Emerging Sciences Conference, NILES 2020, pp. 332–337, 2020, doi:
10.1109/NILES50944.2020.9257958.
[39] M. A. Ahmed, B. B. Zaidan, A. A. Zaidan, M. M. Salih, and M. M. Bin Lakulu, “A review on systems-based sensory gloves for
sign language recognition state of the art between 2007 and 2017,” Sensors (Switzerland), vol. 18, no. 7, 2018, doi:
10.3390/s18072208.
[40] M. M. Chandra, S. Rajkumar, and L. S. Kumar, “Sign languages to speech conversion prototype using the SVM classifier,” IEEE
Region 10 Annual International Conference, Proceedings/TENCON, vol. 2019-Octob, pp. 1803–1807, 2019, doi:
10.1109/TENCON.2019.8929356.
[41] P. Vijayalakshmi and M. Aarthi, “Sign language to speech conversion,” 2016 International Conference on Recent Trends in
Information Technology, ICRTIT 2016, 2016, doi: 10.1109/ICRTIT.2016.7569545.
[42] S. Vutinuntakasame, V. R. Jaijongrak, and S. Thiemjarus, “An assistive body sensor network glove for speech-and hearing-
impaired disabilities,” Proceedings-2011 International Conference on Body Sensor Networks, BSN 2011, pp. 7–12, 2011, doi:
10.1109/BSN.2011.13.


[43] K. Kania and U. Markowska-Kaczmar, “American sign language fingerspelling recognition using wide residual networks,”
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 10841 LNAI, pp. 97–107, 2018, doi: 10.1007/978-3-319-91253-0_10.
[44] K. W. Kim, M. S. Lee, B. R. Soon, M. H. Ryu, and J. N. Kim, “Recognition of sign language with an inertial sensor-based data
glove,” Technology and Health Care, vol. 24, no. s1, pp. S223–S230, 2015, doi: 10.3233/THC-151078.
[45] A. Gul, B. Zehra, S. Shah, N. Javed, and M. I. Saleem, “Two-way smart communication system for deaf dumb and normal
people,” ICISCT 2020-2nd International Conference on Information Science and Communication Technology, 2020, doi:
10.1109/ICISCT49550.2020.9080028.
[46] X. Cai, T. Guo, X. Wu, and H. Sun, “Gesture recognition method based on wireless data glove with sensors,” Sensor Letters, vol.
13, no. 2, pp. 134–137, 2015, doi: 10.1166/sl.2015.3454.
[47] M. S. Amin, M. T. Amin, M. Y. Latif, A. A. Jathol, N. Ahmed, and M. I. N. Tarar, “Alphabetical gesture recognition of American
sign language using E-voice smart glove,” Proceedings - 2020 23rd IEEE International Multi-Topic Conference, INMIC 2020,
2020, doi: 10.1109/INMIC50486.2020.9318185.
[48] S. K. Verma, A. P. J. Abdul, R. Kesarwani, and G. Kaur, “HANDTALK : Interpreter for the differently abled: A Review,”
International Journal of Innovative Research and Creative Technology www.ijirct.org, vol. 1, no. 4, pp. 402–404, 1201.
[49] P. Sharma and N. Sharma, “Gesture recognition system,” Proceedings-2019 4th International Conference on Internet of Things:
Smart Innovation and Usages, IoT-SIU 2019, 2019, doi: 10.1109/IoT-SIU.2019.8777487.
[50] L. Yin, M. Dong, Y. Duan, W. Deng, K. Zhao, and J. Guo, “A high-performance training-free approach for hand gesture
recognition with accelerometer,” Multimedia Tools and Applications, vol. 72, no. 1, pp. 843–864, 2014, doi: 10.1007/s11042-013-
1368-1.

BIOGRAPHIES OF AUTHORS

Soly Mathew Biju is an associate professor at the University of Wollongong in Dubai. She leads the Global Health and Wellbeing cluster at UOWD and is also the Discipline Leader for Computer Science. She has 23 years of experience in industry and academia. She is an internationally certified software testing professional (ISTQB) and has achieved Mastery in IBM Security Intelligence, with expertise and research interests in machine learning techniques and sensor-based wearable applications in the field of health and wellbeing. She is also interested in innovative teaching methods and technology-enhanced subject delivery. She has acquired grants for, and led, a number of research projects as Principal Investigator. She has published around 40 research articles, including conference papers. She also conducts the executive IBM Security Intelligence Engineer training program for professionals. She founded the FEIS technical workshop series. She is the university representative at Red Hat Academy and Education Liaison Officer at the BCS Middle East Chapter. She can be contacted at email: [email protected].

Dr. Obada Al-Khatib received the B.Sc. degree (Hons.) in electrical engineering
from Qatar University, Qatar, in 2006, the M.Eng. degree (Hons.) in communication and
computer from the National University of Malaysia, Bangi, Malaysia, in 2010, and the Ph.D.
degree in electrical and information engineering from The University of Sydney, Sydney,
Australia, in 2015. From 2006 to 2009, he was an Electrical Engineer with Consolidated
Contractors International Company, Qatar. In 2015, he joined the Centre for IoT and
Telecommunications, The University of Sydney, as a Research Associate. Since 2016, he has
been an Assistant Professor with the Faculty of Engineering and Information Sciences,
University of Wollongong in Dubai, UAE. His current research interests include machine
learning for 5G and beyond, IoT, vehicular networks, green networks, and cloud/edge
computing. He can be contacted at email: [email protected].

Hashir Zahid Sheikh received the B.E. degree in computer engineering from the University of Wollongong in Dubai, Dubai, United Arab Emirates, in 2019. He is a computer engineer at AppsPro. His current research interests include robotic systems, machine learning, IoT, and communication. He previously worked on the implementation of a sustainable and scalable vertical micro-farm. He can be contacted at email: [email protected].

