A Review of Factors That Impact the Design of Glove-Based Wearable Devices
Corresponding Author:
Soly Mathew Biju
Faculty of Engineering and Information Sciences, University of Wollongong in Dubai
Dubai Blocks 5, 14 & 15, Dubai Knowledge Park, P.O. Box 20183, Dubai, UAE
Email: [email protected]
1. INTRODUCTION
In the modern, computerized world, where so much has become simple and accessible, a
sizeable part of the population is still excluded from the advantages this age has to offer [1]. The ability to
speak is something we tend to take for granted. It is one of the most powerful ways of sharing thoughts
and feelings, and it underpins everyday communication with others. Deafness is defined as a degree of
hearing loss such that an individual cannot understand speech even when the sound is loud. As
indicated by the World Health Organization (WHO), around 466 million individuals have hearing loss, 34
million of whom are children. The WHO further estimates that more than 900 million individuals will have
this disability by 2050. There are several causes of hearing loss: hereditary conditions, certain infectious
diseases, complications at birth, chronic ear infections, the use of specific drugs, exposure to excessive
noise, and ageing. This communication barrier adversely affects the lives and social relationships of
deaf individuals [2].
Human gestures are an effective and expressive means of communication and are often used to convey
a person's emotions. A sign language is a language that uses manual gestures, rather than speech, to convey
information. Nonetheless, there are communication barriers between hearing
individuals and deaf people: deaf people may be unable to speak or hear, and hearing people are often
unable to sign. This communication gap can have an adverse effect on the
lives of deaf people. There are two customary methods of communication between deaf people and hearing people
who do not understand sign language: through interpreters or through written text.
Technology has narrowed this gap through systems that convert sign language
into speech. These systems can be broadly classified into two types: glove-based systems and
vision-based systems. In a glove-based system, the signs a person makes while conversing are transmitted to a personal
computer (PC). Each real-time sign is matched against a dataset containing all of the signs that were initially
registered with the system. Once the matching sign is found, the corresponding text is passed to a text-to-speech
converter, which turns the recognized sign into audible speech. A vision-based system, by contrast, uses a
camera to recognize the gestures made by the hand and the body. However, vision-based systems require a
lot of image processing, such as color segmentation, image filtering, and boundary detection.
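To make the glove-based pipeline above concrete, the following is a minimal sketch, assuming a small flex-sensor glove, a hypothetical gesture database, and the pyttsx3 text-to-speech library; the sensor layout, the stored values, and the matching rule are illustrative assumptions rather than the design of any specific reviewed system.

```python
# Minimal sketch of a glove-based recognition pipeline (illustrative only):
# read sensors, match against a stored gesture set, speak the result.
# The sensor count, the values, and the matching rule are assumptions.
import pyttsx3  # offline text-to-speech library used here for the speech step

# Hypothetical reference database: one stored flex-sensor reading per letter.
GESTURE_DB = {
    "A": [850, 240, 230, 220, 210],   # five flex-sensor ADC values (0-1023)
    "B": [300, 870, 880, 860, 850],
    # ... remaining letters would be registered when the system is set up
}

def read_flex_sensors():
    """Placeholder for reading the five flex sensors over serial/Bluetooth."""
    return [840, 250, 235, 215, 205]

def closest_gesture(sample, db):
    """Return the stored gesture nearest to the live reading (squared distance)."""
    return min(db, key=lambda g: sum((s - r) ** 2 for s, r in zip(sample, db[g])))

if __name__ == "__main__":
    letter = closest_gesture(read_flex_sensors(), GESTURE_DB)
    engine = pyttsx3.init()
    engine.say(letter)        # sign-to-speech step described in the text
    engine.runAndWait()
```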
This study reviews sign language gloves through an assessment and
evaluation based on a comprehensive synthesis of published sensor glove research. The review
highlights the important achievements, sensor positioning, accuracy, limitations, methodology used, and
processing time of the reviewed systems, and stresses the challenges and prospects for this developing area of research.
2. METHOD
This study offers a systematic analysis of the literature on smart gloves for mute
individuals. Several evolving ideas have been considered to improve insight into advances
on this crucial issue. The study addresses this gap in the literature through a
comprehensive assessment and evaluation based on a synthesis of published sensor glove
research. The key factors emerging from this literature assessment are:
− Sensors used
− Sensor positioning
− Sign language
− Accuracy and efficiency of the system
− Time taken to process
− Limitations
− Methodology used
Relevant articles were identified using a keyword search approach. Various
keywords were defined and explored on IEEE Xplore. The main aim of this research is to understand the
current state of research on sensor gloves that can be used to interpret sign language. After searching
IEEE Xplore, 23 articles were found that met the criteria for this analysis. All the articles were
comprehensively reviewed by the authors to identify common factors, and these factors were compared to find
the differences between the articles. The key aim of each article was also identified while processing the articles.
The results section is divided into three subsections: aim of articles, sensors, and feature comparison.
The aim of articles subsection discusses the major goal of each paper. The sensors subsection compares the
sensors used in these gloves and the impact they have on the results. Finally, the feature comparison subsection
analyzes the differences between papers that aim to achieve the same outcome.
The sign languages covered by the reviewed systems include Australian sign language (AUSLAN), British sign language (BSL), New Zealand sign language (NZSL),
Indian sign language (ISL), deaf blind sign language (DBSL), Czech sign language (CSL), British, Australian
and New Zealand sign language (BANZSL), and standard manual sign language (SMSL).
The authors in [15] and [16] used surface electromyography (sEMG) to acquire the data, which was first segmented.
After segmentation, features were extracted; this was done for both the inertial and the static sensors. The
two feature sets were then cascaded and the best feature subset was selected. Once the subset was determined, it was
passed through a classification model to recognize the gesture.
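The segment, extract, cascade, and classify flow described above can be sketched as follows; the window size, the specific features, the feature selector, and the random forest classifier are illustrative assumptions and not the exact design used in [15] or [16].

```python
# Illustrative sketch of the segment -> extract features -> cascade -> classify
# flow described above; all concrete choices here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def window_features(signal, win=50):
    """Split a 1-D sensor stream into windows and extract simple features."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        feats.append([seg.mean(), seg.std(), np.abs(np.diff(seg)).sum()])
    return np.array(feats)

def cascade(emg_feats, imu_feats):
    """Concatenate (cascade) sEMG and inertial features window by window."""
    return np.hstack([emg_feats, imu_feats])

# Hypothetical recordings: one sEMG channel and one inertial channel.
rng = np.random.default_rng(0)
emg = window_features(rng.standard_normal(500))
imu = window_features(rng.standard_normal(500))
X = cascade(emg, imu)
y = rng.integers(0, 3, size=len(X))                 # fake gesture labels

selector = SelectKBest(f_classif, k=4).fit(X, y)    # pick the best feature subset
clf = RandomForestClassifier().fit(selector.transform(X), y)
print(clf.predict(selector.transform(X[:1])))       # recognize one windowed gesture
```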
In [17] and [18], features were extracted from the sensors and passed through a
support vector machine (SVM) based classifier to recognize the gesture made. In [19] and [20], a gesture
recognition algorithm is devised that first checks whether the contact sensors indicate a matching gesture. If the
gesture is not found, the code then looks for a match based on the flex sensors. These two stages can
determine almost all gestures except the dynamic ones; to determine dynamic gestures, the inertial
sensor is used. In [21] and [22], no dynamic gestures were recognized because the glove could only recognize static
gestures. To process a gesture, three layers of nodes were used: the first layer had 7 nodes holding the
values from the seven sensors; these 7 nodes were mapped to 52 nodes by applying weights to them; and the
final layer had 26 nodes, each corresponding to a letter of the alphabet. The authors in [23]–[31] and [32] stored the
sensor values in a file that was loaded into a LabVIEW program. This program receives the values from
the file and matches the data against gestures close to American sign language (ASL) gestures. In [33]–
[36] and [37], the gestures for each letter were processed by storing the voltage levels of the sensors in a
database; to determine a gesture, the program compares the live sensor values with those in the database.
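The three-layer 7-to-52-to-26 node mapping described for [21] and [22] can be sketched roughly as below; only the layer sizes come from the text, while the activation function, the weight values, and the absence of a training loop are assumptions made for illustration.

```python
# Minimal sketch of the 7 -> 52 -> 26 node mapping described above.
# Only the layer sizes are from the text; everything else is assumed.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((7, 52)) * 0.1   # weights: 7 sensor inputs -> 52 hidden nodes
W2 = rng.standard_normal((52, 26)) * 0.1  # weights: 52 hidden nodes -> 26 letter outputs

def classify(sensor_values):
    """Map seven normalized sensor readings to one of 26 letters (A-Z)."""
    hidden = np.tanh(np.asarray(sensor_values) @ W1)  # assumed tanh activation
    scores = hidden @ W2
    return chr(ord("A") + int(np.argmax(scores)))

print(classify([0.9, 0.1, 0.2, 0.1, 0.1, 0.0, 0.3]))  # e.g. one static letter pose
```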
In [38] and [39], a learning mode is used to train the system: the system stores values from the glove
in a comma-separated values (CSV) file 20 times, and these readings are used to evaluate the range of each gesture
for the specific user. Whenever the user starts communicating with the glove, the program takes the current sensor
values and looks up the corresponding gesture in the CSV file. The authors in [40] and [41] used
an SVM-based classifier; the sensor values for each gesture are used to train the classifier, and the resulting
model is used to predict the gesture made. In [42], [43], and [44], an online gesture recognition model is
proposed that consists of two parts: threshold-based segmentation and classification with a probabilistic
model. The system reads the sensor values and uses the threshold to determine which gesture has been made,
and the probabilistic model is then used to validate the gesture. A much less efficient approach was used in [35], [45], and [46], where
the gestures were recorded manually: the user has to make the gesture and select the corresponding letter, so this solution
will fail to detect the gesture if there is even a slight difference in the sensor readings. In [47]–[49] and [50], the sensor
values were obtained and passed through multiple algorithms to evaluate the accuracy; the glove was able to
achieve higher accuracy than previous experiments. Figure 1 shows the overall design for sign language
recognition used by most systems.
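The per-user learning mode of [38] and [39] can be sketched as a range-matching lookup; the 20-sample enrollment and the CSV storage follow the description above, while the file name, column layout, and matching rule are assumptions.

```python
# Illustrative sketch of the per-user "learning mode" range matching described
# above. File name, column layout, and the matching rule are assumptions.
import csv

def learn_gesture(label, samples, path="gestures.csv"):
    """Append the min/max range of 20 enrollment readings for one gesture."""
    columns = list(zip(*samples))                        # one column per sensor
    row = [label] + [f"{min(c)}:{max(c)}" for c in columns]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)

def recognize(reading, path="gestures.csv"):
    """Return the first stored gesture whose ranges contain the live reading."""
    with open(path, newline="") as f:
        for label, *ranges in csv.reader(f):
            bounds = [tuple(map(float, r.split(":"))) for r in ranges]
            if all(lo <= v <= hi for v, (lo, hi) in zip(reading, bounds)):
                return label
    return None

# Usage: enroll one gesture from 20 readings, then classify a live reading.
enrollment = [[510 + i, 220 - i, 300, 280, 260] for i in range(20)]
learn_gesture("A", enrollment)
print(recognize([515, 210, 300, 280, 260]))   # -> "A"
```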
The articles presented in this review have been summarized in Table 1 (APPENDIX).
4. CONCLUSION
This review centered on the characteristics of glove-based wearable devices and uncovered
numerous concepts and how they are used. The range and complexity of the articles highlight the
algorithms and sensors used to develop these gloves. Most of the gloves did not have an accelerometer to capture
dynamic gestures and relied on other methods instead; however, using flex sensors alone to
identify dynamic gestures can produce ambiguous results. Some of the gloves could not process data in
real time, which is a significant drawback since the intended application is real-time conversation. Some gloves required
training before use, which is a problem when the glove is frequently switched between multiple users.
Several articles did not give adequate information about their tests, test plans, or the accuracy of the system.
Certain gloves had a delay in reading gestures, so the hand had to stay still for a short period of time
before moving to the next gesture to allow the system to interpret it; in other cases, a delay was
introduced to process the data. This is very inconvenient, as the communication speed becomes quite slow. In
terms of accuracy, half of the papers reviewed achieved an accuracy of 90% or more, while six of them reported
accuracies between 70% and 90%. Some papers reported multiple accuracies for different algorithms or designs. One of the
major factors in achieving better accuracy was the use of either contact sensors or an accelerometer to capture dynamic
gestures. The other factor that influenced accuracy was the algorithm used: the higher-accuracy systems usually
employed machine learning approaches or techniques such as k-nearest
neighbors (k-NN), segmentation, and dedicated gesture recognition algorithms, whereas the lower-accuracy systems
used raw sensor values to determine the gesture. Hand size is another factor that can affect
accuracy: with a larger hand, the flex sensors bend more fully, which produces a wider
range of values and increases accuracy. Moreover,
some of the papers were not able to determine the accuracy or did not report it. Future
improvements could include designing the glove around the individual user to yield better results, or
comparing different hand sizes to quantify the resulting inaccuracy, which would help determine whether a
glove will work for other users. The use of different sensors could be another improvement, as this could help in
making a more flexible glove.
APPENDIX
REFERENCES
[1] G. Kumar, M. K. Gurjar, and S. B. Singh, “American sign language translating glove using flex sensor,” Imperial journal of
interdisciplinary research, no. 6, pp. 1439–1441, 2016.
[2] S. Bin Rizwan, M. S. Z. Khan, and M. Imran, “American sign language translation via smart wearable glove technology,” RAEE
2019-International Symposium on Recent Advances in Electrical Engineering, 2019, doi: 10.1109/RAEE.2019.8886931.
[3] K. S. Abhishek, L. C. F. Qubeley, and D. Ho, “Glove-based hand gesture recognition sign language translator using capacitive
touch sensor,” 2016 IEEE International Conference on Electron Devices and Solid-State Circuits, EDSSC 2016, pp. 334–337,
2016, doi: 10.1109/EDSSC.2016.7785276.
[4] M. Elmahgiubi, M. Ennajar, N. Drawil, and M. S. Elbuni, “Sign language translator and gesture recognition,” GSCIT 2015-Global
Summit on Computer and Information Technology-Proceedings, 2015, doi: 10.1109/GSCIT.2015.7353332.
[5] R. Ambar, C. K. Fai, M. H. Abd Wahab, M. M. Abdul Jamil, and A. A. Ma’Radzi, “Development of a wearable device for sign
language recognition,” Journal of Physics: Conference Series, vol. 1019, no. 1, 2018, doi: 10.1088/1742-6596/1019/1/012017.
[6] K. N. Tarchanidis and J. N. Lygouras, “Data glove with a force sensor,” IEEE Transactions on Instrumentation and
Measurement, vol. 52, no. 3, pp. 984–989, 2003, doi: 10.1109/TIM.2003.809484.
[7] J. Wang and Z. Ting, “An ARM-based embedded gesture recognition system using a data glove,” 26th Chinese Control and
Decision Conference, CCDC 2014, pp. 1580–1584, 2014, doi: 10.1109/CCDC.2014.6852419.
[8] R. H. Liang and M. Ouhyoung, “A real-time continuous gesture recognition system for sign language,” Proceedings-3rd IEEE
International Conference on Automatic Face and Gesture Recognition, FG 1998, pp. 558–567, 1998, doi:
10.1109/AFGR.1998.671007.
[9] Y. Khambaty et al., “Cost effective portable system for sign language gesture recognition,” 2008 IEEE International Conference
on System of Systems Engineering, SoSE 2008, 2008, doi: 10.1109/SYSOSE.2008.4724149.
[10] A. Natesh, G. Rajan, B. Thiagarajan, and V. Vijayaraghavan, “Low-cost wireless intelligent two hand gesture recognition
system,” 11th Annual IEEE International Systems Conference, SysCon 2017-Proceedings, 2017, doi:
10.1109/SYSCON.2017.7934745.
[11] J. L. Hernandez-Rebollar, R. W. Lindeman, and N. Kyriakopoulos, “A multi-class pattern recognition system for practical finger
spelling translation,” Proceedings-4th IEEE International Conference on Multimodal Interfaces, ICMI 2002, pp. 185–190, 2002,
doi: 10.1109/ICMI.2002.1166990.
[12] J. Wu, W. Gao, Y. Song, W. Liu, and B. Pang, “A Simple sign language recognition system based on data glove,” International
Conference on Signal Processing Proceedings, ICSP, vol. 2, pp. 1257–1260, 1998, doi: 10.1109/icosp.1998.770847.
[13] N. Tanyawiwat and S. Thiemjarus, “Design of an assistive communication glove using combined sensory channels,”
Proceedings-BSN 2012: 9th International Workshop on Wearable and Implantable Body Sensor Networks, pp. 34–39, 2012, doi:
10.1109/BSN.2012.17.
[14] C. Oz and M. C. Leu, “Recognition of finger spelling of american sign language with artificial neural network using
position/orientation sensors and data glove,” Lecture Notes in Computer Science, vol. 3497, no. II, pp. 157–164, 2005, doi:
10.1007/11427445_25.
[15] J. Wu, L. Sun, and R. Jafari, “A wearable system for recognizing American sign language in real-time using IMU and surface
EMG sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 5, pp. 1281–1290, 2016, doi:
10.1109/JBHI.2016.2598302.
[16] C. Oz and M. C. Leu, “American sign language word recognition with a sensory glove using artificial neural networks,”
Engineering Applications of Artificial Intelligence, vol. 24, no. 7, pp. 1204–1213, 2011, doi: 10.1016/j.engappai.2011.06.015.
[17] B. G. Lee and S. M. Lee, “Smart wearable hand device for sign language interpretation system with sensors fusion,” IEEE
Sensors Journal, vol. 18, no. 3, pp. 1224–1232, 2018, doi: 10.1109/JSEN.2017.2779466.
[18] N. Sriram and M. Nithiyanandham, “A hand gesture recognition based communication system for silent speakers,” 2013
International Conference on Human Computer Interactions, ICHCI 2013, 2013, doi: 10.1109/ICHCI-IEEE.2013.6887815.
[19] S. S. Ahmed, H. Gokul, P. Suresh, and V. Vijayaraghavan, “Low-cost wearable gesture recognition system with minimal user
calibration for asl,” Proceedings-2019 IEEE International Congress on Cybermatics: 12th IEEE International Conference on
Internet of Things, 15th IEEE International Conference on Green Computing and Communications, 12th IEEE International
Conference on Cyber, Physical and So, pp. 1080–1087, 2019, doi: 10.1109/iThings/GreenCom/CPSCom/SmartData.2019.00185.
[20] H. Kaur and J. Rani, “A review: Study of various techniques of Hand gesture recognition,” 1st IEEE International Conference on
Power Electronics, Intelligent Control and Energy Systems, ICPEICES 2016, 2017, doi: 10.1109/ICPEICES.2016.7853514.
[21] S. A. Mehdi and Y. N. Khan, “Sign language recognition using sensor gloves,” ICONIP 2002-Proceedings of the 9th
International Conference on Neural Information Processing: Computational Intelligence for the E-Age, vol. 5, pp. 2204–2206,
2002, doi: 10.1109/ICONIP.2002.1201884.
[22] M. I. Saleem, P. Otero, S. Noor, and R. Aftab, “Full duplex smart system for deaf dumb and normal people,” 2020 Global
Conference on Wireless and Optical Technologies, GCWOT 2020, 2020, doi: 10.1109/GCWOT49901.2020.9391593.
[23] J. M. Allen, P. K. Asselin, and R. Foulds, “American sign language finger spelling recognition system,” Proceedings of the IEEE
Annual Northeast Bioengineering Conference, NEBEC, pp. 285–286, 2003, doi: 10.1109/nebc.2003.1216106.
[24] S. Kumuda and P. K. Mane, “Smart assistant for deaf and dumb using flexible resistive sensor: Implemented on LabVIEW
platform,” Proceedings of the 5th International Conference on Inventive Computation Technologies, ICICT 2020, pp. 994–1000,
2020, doi: 10.1109/ICICT48043.2020.9112553.
[25] A. Arif, S. T. H. Rizvi, I. Jawaid, M. A. Waleed, and M. R. Shakeel, “Techno-talk: An American sign language (ASL) translator,”
International Conference on Control, Decision and Information Technologies, CoDIT 2016, pp. 665–670, 2016, doi:
10.1109/CoDIT.2016.7593642.
[26] K. S. and V. M. H. Joshi, S. Bhati, “Detection of finger motion using flex sensor for assisting speech impaired,” IJIRSET, 2017,
doi: 10.15680/IJIRSET.2017.0610172.
[27] G. Sabaresh and A. Karthi, “Design and implementation of a sign-to-speech/text system for deaf and dumb people,” IEEE
International Conference on Power, Control, Signals and Instrumentation Engineering, ICPCSI 2017, pp. 1840–1844, 2018, doi:
10.1109/ICPCSI.2017.8392033.
[28] J. M. Rehg and T. Kanade, “Digiteyes: vision-based hand tracking for human-computer interaction,” Motion of Non-Rigid and
Articulated Obgects Workshop, Proceedings, pp. 16–22, 1994, doi: 10.1109/mnrao.1994.346260.
[29] S. S. Patil, “Sign language interpreter for deaf and dumb people,” International Journal for Research in Applied Science and
Engineering Technology, vol. 7, no. 9, pp. 354–359, 2019, doi: 10.22214/ijraset.2019.9050.
[30] L. K. Simone, E. Elovic, U. Kalambur, and D. Kamper, “A low cost method to measure finger flexion in individuals with reduced
hand and finger range of motion,” Annual International Conference of the IEEE Engineering in Medicine and Biology-
Proceedings, vol. 26 VII, pp. 4791–4794, 2004, doi: 10.1109/iembs.2004.1404326.
[31] R. Gentner and J. Classen, “Development and evaluation of a low-cost sensor glove for assessment of human finger movements in
neurophysiological settings,” Journal of Neuroscience Methods, vol. 178, no. 1, pp. 138–147, 2009, doi:
10.1016/j.jneumeth.2008.11.005.
[32] H. Brashear, V. Henderson, K. H. Park, H. Hamilton, S. Lee, and T. Starner, “American sign language recognition in game
development for deaf children,” Eighth International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS
2006, vol. 2006, pp. 79–86, 2006, doi: 10.1145/1168987.1169002.
[33] V. Pathak, S. Mongia, and G. Chitranshi, “A framework for hand gesture recognition based on fusion of flex, contact and
accelerometer sensor,” Proceedings of 2015 3rd International Conference on Image Information Processing, ICIIP 2015, pp.
312–319, 2016, doi: 10.1109/ICIIP.2015.7414787.
[34] D. Bajpai, U. Porov, G. Srivastav, and N. Sachan, “Two way wireless data communication and American sign language translator
glove for images text and speech display on mobile phone,” Proceedings-2015 5th International Conference on Communication
Systems and Network Technologies, CSNT 2015, pp. 578–585, 2015, doi: 10.1109/CSNT.2015.121.
[35] K. Kadam, R. Ganu, A. Bhosekar, and S. D. Joshi, “American sign language interpreter,” Proceedings - 2012 IEEE 4th
International Conference on Technology for Education, T4E 2012, pp. 157–159, 2012, doi: 10.1109/T4E.2012.45.
[36] Z. Lu, X. Chen, Q. Li, X. Zhang, and P. Zhou, “A hand gesture recognition framework and wearable gesture-based interaction
prototype for mobile devices,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 2, pp. 293–299, 2014, doi:
10.1109/THMS.2014.2302794.
[37] R. McKee and J. Napier, “Interpreting into international sign pidgin,” Sign Language & Linguistics, vol. 5, no. 1, pp. 27–54,
2002, doi: 10.1075/sll.5.1.04mck.
[38] S. A. E. El-Din and M. A. A. El-Ghany, “Sign language interpreter system: An alternative system for machine learning,” 2nd
Novel Intelligent and Leading Emerging Sciences Conference, NILES 2020, pp. 332–337, 2020, doi:
10.1109/NILES50944.2020.9257958.
[39] M. A. Ahmed, B. B. Zaidan, A. A. Zaidan, M. M. Salih, and M. M. Bin Lakulu, “A review on systems-based sensory gloves for
sign language recognition state of the art between 2007 and 2017,” Sensors (Switzerland), vol. 18, no. 7, 2018, doi:
10.3390/s18072208.
[40] M. M. Chandra, S. Rajkumar, and L. S. Kumar, “Sign languages to speech conversion prototype using the SVM classifier,” IEEE
Region 10 Annual International Conference, Proceedings/TENCON, vol. 2019-Octob, pp. 1803–1807, 2019, doi:
10.1109/TENCON.2019.8929356.
[41] P. Vijayalakshmi and M. Aarthi, “Sign language to speech conversion,” 2016 International Conference on Recent Trends in
Information Technology, ICRTIT 2016, 2016, doi: 10.1109/ICRTIT.2016.7569545.
[42] S. Vutinuntakasame, V. R. Jaijongrak, and S. Thiemjarus, “An assistive body sensor network glove for speech-and hearing-
impaired disabilities,” Proceedings-2011 International Conference on Body Sensor Networks, BSN 2011, pp. 7–12, 2011, doi:
10.1109/BSN.2011.13.
[43] K. Kania and U. Markowska-Kaczmar, “American sign language fingerspelling recognition using wide residual networks,”
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 10841 LNAI, pp. 97–107, 2018, doi: 10.1007/978-3-319-91253-0_10.
[44] K. W. Kim, M. S. Lee, B. R. Soon, M. H. Ryu, and J. N. Kim, “Recognition of sign language with an inertial sensor-based data
glove,” Technology and Health Care, vol. 24, no. s1, pp. S223–S230, 2015, doi: 10.3233/THC-151078.
[45] A. Gul, B. Zehra, S. Shah, N. Javed, and M. I. Saleem, “Two-way smart communication system for deaf dumb and normal
people,” ICISCT 2020-2nd International Conference on Information Science and Communication Technology, 2020, doi:
10.1109/ICISCT49550.2020.9080028.
[46] X. Cai, T. Guo, X. Wu, and H. Sun, “Gesture recognition method based on wireless data glove with sensors,” Sensor Letters, vol.
13, no. 2, pp. 134–137, 2015, doi: 10.1166/sl.2015.3454.
[47] M. S. Amin, M. T. Amin, M. Y. Latif, A. A. Jathol, N. Ahmed, and M. I. N. Tarar, “Alphabetical gesture recognition of American
sign language using E-voice smart glove,” Proceedings - 2020 23rd IEEE International Multi-Topic Conference, INMIC 2020,
2020, doi: 10.1109/INMIC50486.2020.9318185.
[48] S. K. Verma, A. P. J. Abdul, R. Kesarwani, and G. Kaur, “HANDTALK: Interpreter for the differently abled: A review,”
International Journal of Innovative Research and Creative Technology, vol. 1, no. 4, pp. 402–404.
[49] P. Sharma and N. Sharma, “Gesture recognition system,” Proceedings-2019 4th International Conference on Internet of Things:
Smart Innovation and Usages, IoT-SIU 2019, 2019, doi: 10.1109/IoT-SIU.2019.8777487.
[50] L. Yin, M. Dong, Y. Duan, W. Deng, K. Zhao, and J. Guo, “A high-performance training-free approach for hand gesture
recognition with accelerometer,” Multimedia Tools and Applications, vol. 72, no. 1, pp. 843–864, 2014, doi: 10.1007/s11042-013-
1368-1.
BIOGRAPHIES OF AUTHORS
Dr. Obada Al-Khatib received the B.Sc. degree (Hons.) in electrical engineering
from Qatar University, Qatar, in 2006, the M.Eng. degree (Hons.) in communication and
computer from the National University of Malaysia, Bangi, Malaysia, in 2010, and the Ph.D.
degree in electrical and information engineering from The University of Sydney, Sydney,
Australia, in 2015. From 2006 to 2009, he was an Electrical Engineer with Consolidated
Contractors International Company, Qatar. In 2015, he joined the Centre for IoT and
Telecommunications, The University of Sydney, as a Research Associate. Since 2016, he has
been an Assistant Professor with the Faculty of Engineering and Information Sciences,
University of Wollongong in Dubai, UAE. His current research interests include machine
learning for 5G and beyond, IoT, vehicular networks, green networks, and cloud/edge
computing. He can be contacted at email: [email protected].
Hashir Zahid Sheikh received the B.E. degree in computer engineering from the
University of Wollongong in Dubai, Dubai, United Arab Emirates, in 2019. He is a computer
engineer at AppsPro. His current research interests include robotic systems, machine learning, IoT, and
communication. He previously worked on the implementation of a sustainable and scalable vertical
micro-farm. He can be contacted at email: [email protected].