
A Survey on Sign Language Recognition Using Smartphones

Sakher Ghanem¹,², Christopher Conly¹, Vassilis Athitsos¹

¹ Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, Texas, USA
² Faculty of Computing and Information Technology, University of Jeddah, Jeddah, Saudi Arabia

[email protected], [email protected], [email protected]

ABSTRACT

Deaf people around the globe use sign languages for their communication needs. Innovative new technologies, such as smartphones, offer a host of new functionalities to their users. If such mobile devices become capable of recognizing sign languages, this will open up the opportunity for offering significantly more user-friendly mobile apps to sign language users. However, in order to achieve satisfactory results, there are many challenges that must be considered and overcome, such as lighting conditions, background noise, and processing and energy limitations. This paper aims to cover the most recent techniques in mobile-based sign language recognition systems. We categorize existing solutions into sensor-based and vision-based, as these two categories offer distinct advantages and disadvantages. The primary focus of this literature review is on two main aspects of sign language recognition: feature detection and sign classification algorithms.

CCS Concepts

• Computing methodologies → Computer vision; Object detection; Object recognition

Keywords

Sign Language Recognition; Smartphone; Portable Device; Survey

1. INTRODUCTION

According to a report from Gallaudet University, a prominent educational institution that serves people who are deaf or hard of hearing, there are approximately 38 million deaf individuals in the United States [8]. Many of those individuals use a sign language, typically American Sign Language (ASL), as a primary or secondary form of communication. Sign languages (SLs) are necessarily visual in nature. For sign language users, communicating with hearing people can be a challenge. Similarly, important information technology and social connectivity tools are not available to sign language users, unless the users are willing to access such tools using a spoken and written language, such as English, with which they may not be comfortable. Technological innovations in automated sign recognition have the potential to help sign language users overcome such obstacles, by facilitating both communication with hearing people and human-computer interaction.

Mobile computing has entered a new era in which mobile phones are powerful enough to be used in advanced applications such as gesture and sign language recognition. Many newly designed smartphones are equipped with multi-core processors, a high-quality GPU, and a high-resolution camera that can reach 12 MP and more. These features allow the devices to execute computationally intensive tasks in less time. In the last decade, many applications of computer vision have been limited to desktops; now, with the availability of advanced processor-equipped smartphones, computer vision is primed to experience a transformation that provides new experiences via mobile devices.

Research has shown that ASL has four basic manual components: finger configuration of the hands, movement of the hands, orientation of the hands, and location of the hands with respect to the body [3]. Any automated sign recognition system needs two main procedures: the detection of the features and the classification of the input data. With mobile phones, the detection process can be affected by the movement of the phone, which causes extraneous motion around the signer. Some techniques use sensor-based technology, which tracks gestures via hand movement using embedded sensors. Other techniques utilize vision-based approaches to process images of the captured gesture. Also, several researchers suggest using a client-server architecture to speed up processing time.

This literature review covers existing sign language recognition systems designed to run on smartphones. The lack of a clear overview in this area is the primary motivation for this work. This survey presents several existing methods and groups them into different categories. The methods are discussed with a focus on the feature detection and classification algorithms.

The rest of the paper is organized as follows. Section 2 discusses the datasets used in this area. Section 3 describes existing approaches for sign language recognition on portable devices, including sensor-based and vision-based approaches. Finally, conclusions and possible future directions of the technology are discussed in Section 4.
Figure 1: ASL signs representing numbers 0-9 and letters of the English alphabet. [32]

2. SIGN DATASETS

In general, there are two types of signs: dynamic and static. Dynamic signs exhibit motion, whereas static signs are characterized by a specific static posture. We did not find any dataset that was designed exclusively for sign language applications on portable devices. Some researchers use a static set of gestures, capturing signs for letters of the English alphabet and the numbers 0-9, e.g., [26]. Figure 1 depicts American Sign Language signs representing numbers and letters. In many implemented methods, a customized dynamic dataset is utilized, e.g., [14]. It is difficult to handle the available datasets that were designed for personal computers, due to the limited storage capacity of mobile phones.

3. SIGN LANGUAGE RECOGNITION USING SMARTPHONES

In sign language recognition, the motion and posture of the human hand can be observed via different approaches. In the sensor-based approach, the movement of the hand is tracked via sensors attached to wireless gloves or sensors embedded in smartphones, and appropriate techniques are used to process the responses from the sensors. In the vision-based approach, the gestures are observed via a mobile camera, and multiple processing steps are applied to recognize the signs that appear in the video stream.

Figure 2: Basic System Architecture.

Any sign recognition system contains three major steps; see Figure 2 for an overview. First, the input data is acquired, for example via the phone camera or from a sensor. The next step is to extract features from the input data. Finally, the sign is classified using an appropriate algorithm that is compatible with the extracted features. For each method we examine, we take a close look at how that method approaches the problems of feature extraction and recognition/classification.
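To make the three-step structure concrete, the following is a minimal sketch of such a pipeline in Python. The function names and stage implementations are our own illustrative assumptions, not taken from any particular surveyed system.

```python
from typing import Callable, Sequence

def recognize_sign(acquire: Callable[[], object],
                   extract_features: Callable[[object], Sequence[float]],
                   classify: Callable[[Sequence[float]], str]) -> str:
    """Run the generic three-step pipeline: acquire, extract, classify."""
    raw = acquire()                   # e.g., a camera frame or a sensor reading
    features = extract_features(raw)  # e.g., a skin mask, HOG vector, or motion trace
    return classify(features)         # e.g., SVM, DTW, or template matching
```

Each system surveyed below can be read as one particular choice of these three components.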

3.1 Sensor-Based Approach

The usage of sensors simplifies the detection process and makes it faster. At the same time, sensor-based systems can be expensive and cumbersome to use, and these factors discourage adoption by a large number of users. Table 1 presents a comparison between existing sensor-based models that use the phone as a platform. Sensor-based approaches can be broadly categorized based on whether they use external sensors, such as gloves, or internal sensors built into the smartphone. The following two subsections discuss these two categories.

3.1.1 Using Gloves

Glove-based approaches have been implemented using sensors that track hand gestures. Multiple sensors embedded in the gloves are used to track the fingers, the palm, and their location and motion. Such an approach provides coordinates of the palm and fingers for further processing. These devices may be connected wirelessly via Bluetooth.

The detection of hand parameters in this approach relies on a customized glove [14, 20] that contains ten flex sensors to track the posture of each finger. Moreover, a G-sensor is used to monitor the orientation of the hand.
Table 1: A Comparison of Available Sensor-Based Systems

| System             | Sensors                | Classification Method            | Gesture Type | Processing | Voc. Size | Dependency       |
| Kau 2015 [14]      | Gloves                 | Template matching                | Dynamic      | Local      | 5         | user-independent |
| Preetham 2013 [20] | Gloves                 | Minimum mean square error        | Static       | Local      | -         | -                |
| Seymour 2015 [26]  | Gloves                 | SVM                              | Static       | Local      | 31        | user-independent |
| Choe 2010 [2]      | Phone internal sensors | DTW                              | Dynamic      | Local      | 20        | user-independent |
| Gupta 2016 [6]     | Phone internal sensors | DTW                              | Dynamic      | Local      | 6         | user-independent |
| Joselli 2009 [11]  | Phone internal sensors | HMM + forward-backward algorithm | Dynamic      | Local      | 10        | user-independent |
| Niezen 2008 [17]   | Phone internal sensors | DTW                              | Dynamic      | Local      | 8         | user-independent |
| Wang 2012 [29]     | Phone internal sensors | Own statistical method           | Dynamic      | Local      | 21        | user-dependent   |

Hand motion is detected by using a gyroscope sensor that calculates the angles of the hands in space. These sensors continuously trace the signal to obtain hand data, which are transferred wirelessly to the mobile device. From the gathered data, the state of the hand is estimated. This state can be decomposed into four independent components: hand posture, position, orientation, and motion.

The recognition methods vary by the available input data and the dataset used. Template matching was used in [14] as a classification method with five dynamic sign classes. In [26], a comparison is made between SVM and neural network methods using two different activation functions: log-sigmoid and symmetric Elliott functions. The experiment was done using static hand gestures representing letters and numbers. In the results, SVM produced better accuracy, but it required 16 times more time for classification, compared to the log-sigmoid and symmetric Elliott neural networks. The advantage of this method was memory usage: only 4 MB of memory were required, which makes this method usable even with low-end smartphones.

3.1.2 Smartphone Internal Sensors Approach

Recently, new smartphones have been equipped with sensors that help to detect the posture and motion of the device. Numerous researchers utilize this feature to create gesture recognition models. The main issue with this approach is the limited sign detail provided by the sensors.

Gestures recognized using such sensors can be decomposed into sequences of two simpler gesture types [29]. Turn gestures correspond to a change in the 3D orientation of the device; an example is rotating the device from the face-up to the face-down position. Translation gestures correspond to the phone moving in 3D space; moving the phone up and down is an example of such a gesture. Segmentation of the motion is performed to detect the start and end point of the movement. Since the accelerometer continuously reads data along the three axes in space, the sum of the differences between the current and previous readings on each axis can be used to detect motion, as done in [11, 17, 29, 2]. To speed up the calculation, Gupta 2016 [6] convert floating-point mean values to integer values by using a probability density function.
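As a rough illustration of this derivative-sum motion test, consider the following sketch. The threshold value is an illustrative assumption that would have to be tuned per device, and the function is not taken from any of the surveyed systems.

```python
import numpy as np

def is_moving(window: np.ndarray, threshold: float = 1.5) -> bool:
    """Decide whether a window of accelerometer readings contains motion.

    window: array of shape (N, 3) with consecutive (x, y, z) samples.
    Sums the absolute sample-to-sample differences on all three axes;
    a large sum indicates that the phone is moving. The threshold here
    is an assumed placeholder, not a value from the surveyed papers.
    """
    deltas = np.abs(np.diff(window, axis=0))  # per-axis derivatives
    return float(deltas.sum()) > threshold
```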
One of the better-known classification methods is the Dynamic Time Warping (DTW) algorithm, which is applied to measure the cost of a selected gesture compared with training data [28, 15]. One of the main advantages of this algorithm is that it does not need large amounts of training data, as it can be used even when only a single training example per class is available. DTW is used by [17, 6, 2] to achieve high accuracy, under the assumption that the start and end times of every sign are known. Joselli 2009 [11] adapted the forward-backward algorithm to classify dynamic input signs using Hidden Markov Models (HMMs), with a database containing ten classes and a total of 400 samples. Wang 2012 [29] process the data from the sensors to develop a sinusoid-like curve that can be used to extract the pattern of the movement; the axis with the largest variance between peak and valley indicates the movement direction.
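A compact version of the standard DTW recurrence, paired with nearest-template classification, might look as follows. This is a generic sketch rather than the exact implementation of any system in Table 1.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Standard dynamic-programming DTW cost between two sequences
    of shape (length, dims), e.g., accelerometer traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local match cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_by_dtw(query: np.ndarray, templates: dict) -> str:
    """Return the label of the closest template. A single recorded
    example per sign class suffices, as noted above."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```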
Table 2: A Comparison of Available Vision-Based Systems

| System            | Feature Extraction                      | Classification Method     | Processing           | Voc. Size | Dependency       |
| Elleuch 2015 [4]  | Skin detection (HSV), convexity defects | SVM                       | Local                | 5         | user-independent |
| Gandhi 2015 [5]   | Background subtraction                  | Template matching         | Local                | -         | -                |
| Hakkun 2015 [7]   | Viola-Jones Haar filters                | KNN                       | Local                | 8         | user-dependent   |
| Hays 2013 [9]     | Skin detection (YCrCb), Canny edge      | SVM                       | Local, client-server | 32        | user-independent |
| Jin 2016 [10]     | Skin detection (RGB), Canny edge, SURF  | SVM                       | Local                | 16        | user-dependent   |
| Joshi 2015 [12]   | PCA                                     | SVM                       | Local                | 5         | user-independent |
| Kamat 2016 [13]   | Skin detection (RGB)                    | Template matching         | Local                | 4         | user-dependent   |
| Lahiani 2015 [16] | Skin detection (RGB), convexity defects | SVM                       | Local                | 10        | user-dependent   |
| Prasuhn 2014 [19] | Skin detection (HUV), HOG               | Brute-force matching      | Client-server        | 26        | user-dependent   |
| Raheja 2015 [21]  | Sobel edge filter, PCA                  | Template matching         | Client-server        | 10        | user-dependent   |
| Rao 2016 [22]     | Gaussian and Sobel edge filters + PCA   | MDC                       | Local                | 18        | user-independent |
| Saxena 2014a [24] | Sobel edge filter                       | Backpropagation algorithm | Client-server        | 5         | user-dependent   |
| Saxena 2014b [25] | Skin detection (RGB), PCA               | Template matching         | Client-server        | 10        | user-dependent   |
| Warrier 2016 [30] | Skin detection (RGB)                    | Geometric matching        | Client-server        | 11        | user-dependent   |

3.2 Vision-Based Approach

In recent years, the availability and simplicity of smartphones have encouraged researchers to utilize them in vision-based sign language recognition applications. The vision-based approach uses the phone camera to capture images or video of the hand performing signs. These frames are further processed to recognize the signs, so as to produce text or speech output. Vision-based approaches risk producing relatively low accuracy compared to sensor-based approaches, due to multiple challenges in image processing, such as light variations, dependency on the skin color of the user, and complex backgrounds in the image. Table 2 shows a comparison between currently existing vision-based methods. It is important to note that all approaches listed in this table use static signs, except Rao 2016 [22], which includes dynamic signs.

Extracting accurate hand features is a major challenge for the vision-based approach. Extraction is affected by many factors, such as lighting conditions and background noise. The more accurate the detection and extraction are, the better the recognition results become. The orientation and position of the hand can be detected in different ways, for example using skin detection or Viola-Jones cascades of boosted rectangle filters [27]. Detecting the position and orientation of the hand accurately at each frame also allows us to detect the motion of the hand for dynamic signs.

Skin segmentation algorithms, which often depend on specifying thresholds [18], are widely used in computer vision applications. Researchers either specify skin thresholds manually or obtain them automatically by taking a skin color sample before the experiment. Several available models use the RGB color space, e.g., [10, 13, 16, 25, 30]. To address brightness and lighting problems, [9] use the YCrCb color space, [4] employ the HSV color space, and [19] benefit from the HUV color space.
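For example, a fixed-threshold skin segmentation step in the HSV color space could be sketched as follows with OpenCV. The threshold values are illustrative assumptions, since, as noted above, real systems set them manually or from a user skin sample.

```python
import cv2
import numpy as np

def skin_mask_hsv(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels via fixed HSV thresholds."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower bound
    upper = np.array([25, 255, 255], dtype=np.uint8)  # assumed upper bound
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening suppresses small background speckles.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```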
The Viola-Jones detection method [27], which uses cascades of boosted rectangle filters, is a well-known method that is commonly used for detecting hands. Some researchers [7, 4] implement the Viola-Jones method on portable platforms, as Viola-Jones is relatively easy to implement and has low hardware requirements. Another alternative, used by [12, 21, 22, 25], is Principal Component Analysis (PCA).
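In OpenCV, running a Viola-Jones detector reduces to loading a trained cascade and scanning the frame at multiple scales. The cascade file below is hypothetical: OpenCV ships face cascades, and a hand cascade would have to be trained or obtained separately.

```python
import cv2

# Hypothetical cascade file; not bundled with OpenCV.
hand_cascade = cv2.CascadeClassifier("hand_cascade.xml")

def detect_hands(gray_frame):
    """Return bounding boxes (x, y, w, h) of hand candidates found by
    scanning the frame at multiple scales with the boosted cascade."""
    return hand_cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(40, 40))
```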
Additional hand details are also extracted by various methods. Examples of such details are the number of open fingers (measured by finding contours), the palm area (found as the largest circle that fits in the hand region), the convex hull, and convexity defects [4, 9, 16]. Canny edge detection [1] can also be used to identify the hand area [10]. Likewise, a Sobel edge filter, which measures the change in intensity along the direction of greatest change, has been used [21, 22, 24]. Prasuhn 2014 [19] apply a Histogram of Oriented Gradients (HOG) method, which is sensitive to the angle of the object, to extract the features from the input image. Another method, used by [5], is background subtraction based on motion detection. In Jin 2016 [10], Speeded-Up Robust Features (SURF) are used as extra features to improve accuracy.
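As one concrete example of such details, counting deep convexity defects on the segmented hand, a rough proxy for the gaps between extended fingers, can be sketched as follows. The depth threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def count_finger_gaps(mask: np.ndarray, min_depth: float = 20.0) -> int:
    """Count deep convexity defects in a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)   # assume largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    depths = defects[:, 0, 3] / 256.0           # depths are stored in fixed point
    return int(np.count_nonzero(depths > min_depth))
```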
Once the features describing a sign have been extracted, there are numerous recognition procedures that can be applied. Support Vector Machines (SVMs) define decision boundaries between classes that are linear in some transformed feature space but can be highly nonlinear in the original feature space [31]. Several papers use SVMs, e.g., [4, 9, 10, 12, 16]. Hakkun 2015 [7] use K-Nearest Neighbor (KNN) for classification. Another simple technique for classification is template matching, used by [5, 13, 30, 21, 12]. The backpropagation algorithm [23] can lead to very efficient classification time-wise, but it needs more training data to minimize the error rate; backpropagation is used by [24] as the recognition method. In Rao 2016 [22], because the speed of processing on portable devices is a major factor, a minimum distance classifier (MDC) was chosen as the classification method. The experiments use sentences of signs as training and test data.
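Training an SVM on such feature vectors is straightforward with a library such as scikit-learn. The placeholder data below stands in for features extracted by any of the methods above and is not from the surveyed experiments; the kernel choice is likewise an assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((100, 64))     # placeholder feature vectors
y_train = rng.integers(0, 10, 100)  # placeholder labels for 10 static signs

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # assumed configuration
clf.fit(X_train, y_train)

x_query = rng.random((1, 64))       # features of a newly captured frame
predicted_sign = clf.predict(x_query)[0]
```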
Some systems assume that the only visible object in the captured image is the hand [7, 4], while more advanced models manage to capture both hands and the face. One way to remove the confusion between face and hand areas is to subtract or isolate the face, so that the detection of hand details will be more precise [4]. Another issue that can be considered is the hand angle and the hand's distance from the mobile device. In tests conducted in [7], optimal results were achieved with no more than 50 cm of distance between the hand and the camera, and with the hand in the upright position.

Due to slow processing times in some models, a client-server framework is used. In such a framework, the phone is connected to a regular computer via a wireless network. Such an approach was implemented in [19, 21, 24, 25, 30]. A cloud service can also be used to execute part of the recognition operations, as done in [9]. Moreover, Elleuch 2015 [4] implement a multithreading technique by running face subtraction and hand pre-processing at the same time, thus decreasing the processing time by half.
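A minimal phone-side client for such a framework might simply compress each frame and post it to the server. The endpoint URL and JSON response format below are hypothetical, chosen only to illustrate the offloading pattern.

```python
import cv2
import requests

SERVER_URL = "http://192.168.1.10:8000/classify"  # hypothetical server address

def classify_remotely(frame_bgr) -> str:
    """Offload classification of one frame to a remote server."""
    ok, jpeg = cv2.imencode(".jpg", frame_bgr)   # compress to cut transfer time
    if not ok:
        raise ValueError("frame could not be encoded")
    resp = requests.post(SERVER_URL, data=jpeg.tobytes(),
                         headers={"Content-Type": "image/jpeg"}, timeout=2.0)
    resp.raise_for_status()
    return resp.json()["sign"]                   # assumed response: {"sign": "A"}
```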
4. CONCLUSIONS

In this paper, we have provided a survey of existing techniques for sign language recognition on smartphones. We discussed sensor-based approaches, which track hand motion and/or posture using hardware-based trackers installed in a glove or inside a smartphone. We also discussed vision-based approaches, which use the phone camera for observing the hand. In discussing both types of approaches, we focused on the detection and feature extraction module as well as the classification module of each approach.

Regarding vision-based methods, significant challenges remain to be overcome by future research, regarding the accuracy of hand detection and articulated hand pose estimation, as well as classification accuracy. Most existing vision-based methods only recognize static gestures, and we expect new methods to be proposed for handling dynamic gestures. Similarly, existing methods typically cover no more than a few tens of signs, and there is significant room for improvement until methods can cover the several thousands of signs that users of a sign language employ in their daily usage. Extending vision-based recognition systems to cover dynamic gestures and thousands of signs may strain the hardware capabilities of smartphones. While smartphone hardware specs are expected to continue to improve rapidly, cloud processing could push the boundaries further ahead by alleviating the hardware requirements on the mobile device. However, maintaining interactivity and low latency while using cloud processing can also be challenging, and these are also issues that we expect future research to focus on.

5. ACKNOWLEDGMENTS

This work was partially supported by National Science Foundation grants IIS-1055062 and IIS-1565328. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect the views of the National Science Foundation.

6. REFERENCES

[1] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679–698, 1986.
[2] B. Choe, J.-K. Min, and S.-B. Cho. Online gesture recognition for user interface on accelerometer built-in mobile phones. In International Conference on Neural Information Processing, pages 650–657. Springer, 2010.
[3] E. Costello. American Sign Language Dictionary. Random House Reference, 2008.
[4] H. Elleuch, A. Wali, A. Samet, and A. M. Alimi. A static hand gesture recognition system for real time mobile device monitoring. In Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference on, pages 195–200. IEEE, 2015.
[5] P. Gandhi, D. Dalvi, P. Gaikwad, and S. Khode. Image based sign language recognition on Android. International Journal of Engineering and Techniques, 1(5):55–60, 2015.
[6] H. P. Gupta, H. S. Chudgar, S. Mukherjee, T. Dutta, and K. Sharma. A continuous hand gestures recognition technique for human-machine interaction using accelerometer and gyroscope sensors. IEEE Sensors Journal, 16(16):6425–6432, 2016.
[7] R. Y. Hakkun, A. Baharuddin, et al. Sign language learning based on Android for deaf and speech impaired people. In Electronics Symposium (IES), 2015 International, pages 114–117. IEEE, 2015.
[8] S. Hamrick, L. Jacobi, P. Oberholtzer, E. Henry, and J. Smith. LibGuides: Deaf statistics: Deaf population of the US. Montana, 16(616,796):2–7, 2010.
[9] P. Hays, R. Ptucha, and R. Melton. Mobile device to cloud co-processing of ASL finger spelling to text conversion. In Image Processing Workshop (WNYIPW), 2013 IEEE Western New York, pages 39–43. IEEE, 2013.
[10] C. M. Jin, Z. Omar, and M. H. Jaward. A mobile application of American Sign Language translation via image processing algorithms. In Region 10 Symposium (TENSYMP), 2016 IEEE, pages 104–109. IEEE, 2016.
[11] M. Joselli and E. Clua. gRmobile: A framework for touch and accelerometer gesture recognition for mobile games. In 2009 VIII Brazilian Symposium on Games and Digital Entertainment, pages 141–150. IEEE, 2009.
[12] T. J. Joshi, S. Kumar, N. Tarapore, and V. Mohile. Static hand gesture recognition using an Android device. International Journal of Computer Applications, 120(21), 2015.
[13] R. Kamat, A. Danoji, A. Dhage, P. Puranik, and S. Sengupta. Monvoix: An Android application for hearing impaired people. Journal of Communications Technology, Electronics and Computer Science, 8:24–28, 2016.
[14] L.-J. Kau, W.-L. Su, P.-J. Yu, and S.-J. Wei. A real-time portable sign language translation system. In 2015 IEEE 58th International Midwest Symposium on Circuits and Systems (MWSCAS), pages 1–4. IEEE, 2015.
[15] J. Kruskall and M. Liberman. The symmetric time warping algorithm: From continuous to discrete. In Time Warps, String Edits and Macromolecules, 1983.
[16] H. Lahiani, M. Elleuch, and M. Kherallah. Real time hand gesture recognition system for Android devices. In Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference on, pages 591–596. IEEE, 2015.
[17] G. Niezen and G. P. Hancke. Gesture recognition as ubiquitous input for mobile phones. In International Workshop on Devices that Alter Perception (DAP 2008), in conjunction with Ubicomp, pages 17–21. Citeseer, 2008.
[18] S. L. Phung, A. Bouzerdoum, and D. Chai. Skin segmentation using color pixel classification: Analysis and comparison. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(1):148–154, 2005.
[19] L. Prasuhn, Y. Oyamada, Y. Mochizuki, and H. Ishikawa. A HOG-based hand gesture recognition system on a mobile device. In 2014 IEEE International Conference on Image Processing (ICIP), pages 3973–3977. IEEE, 2014.
[20] C. Preetham, G. Ramakrishnan, S. Kumar, A. Tamse, and N. Krishnapura. Hand Talk: Implementation of a gesture recognizing glove. In India Educators' Conference (TIIEC), 2013 Texas Instruments, pages 328–331. IEEE, 2013.
[21] J. L. Raheja, A. Singhal, and A. Chaudhary. Android
based portable hand sign recognition system. arXiv
preprint arXiv:1503.03614, 2015.
[22] G. A. Rao and P. Kishore. Sign language recognition
system simulated for video captured with smart phone
front camera. International Journal of Electrical and
Computer Engineering (IJECE), 6(5):2176–2187, 2016.
[23] D. E. Rumelhart, G. E. Hinton, and R. J. Williams.
Learning representations by back-propagating errors.
Cognitive modeling, 5(3):1, 1988.
[24] A. Saxena, D. K. Jain, and A. Singhal. Hand gesture
recognition using an Android device. In
Communication Systems and Network Technologies
(CSNT), 2014 Fourth International Conference on,
pages 819–822. IEEE, 2014.
[25] A. Saxena, D. K. Jain, and A. Singhal. Sign language
recognition using principal component analysis. In
Communication Systems and Network Technologies
(CSNT), 2014 Fourth International Conference on,
pages 810–813. IEEE, 2014.
[26] M. Seymour and M. Tšoeu. A mobile application for
South African Sign Language (SASL) recognition. In
AFRICON, 2015, pages 1–5. IEEE, 2015.
[27] P. Viola and M. J. Jones. Robust real-time face
detection. International journal of computer vision,
57(2):137–154, 2004.
[28] H. Wang, A. Stefan, S. Moradi, V. Athitsos, C. Neidle,
and F. Kamangar. A system for large vocabulary sign
search. In European Conference on Computer Vision,
pages 342–353. Springer, 2010.
[29] X. Wang, P. Tarrío, E. Metola, A. M. Bernardos, and
J. R. Casar. Gesture recognition using mobile phone’s
inertial sensors. In Distributed Computing and
Artificial Intelligence, pages 173–184. Springer, 2012.
[30] K. S. Warrier, J. K. Sahu, H. Halder, R. Koradiya,
and V. K. Raj. Software based sign language
converter. In Communication and Signal Processing
(ICCSP), 2016 International Conference on, pages
1777–1780. IEEE, 2016.
[31] J. Weston and C. Watkins. Multi-class support vector
machines. Technical report, Citeseer, 1998.
[32] Wikipedia. American manual alphabet. https://en.wikipedia.org/wiki/American_manual_alphabet, 2016.
