
Development of a Speech-Assistive Device Integrated in an Android Mobile Application for Individuals with Incomplete Locked-In Syndrome
Alex M. Retona, Angio Gabriel D. Santos, Paolo Miguel C. Villegas, Rosula S.J. Reyes, Ph. D.
Department of Electronics, Computer, and Communications Engineering
Ateneo de Manila University
Quezon City, Philippines
[email protected], [email protected], [email protected], [email protected]

Abstract—Bridging the communication gap between speech-impaired individuals who are also physically disabled is essential to maintaining their social integration in society. With this, a speech-assistive device integrated in an Android mobile application was developed. An integrated sensor box was designed to interface five sensors capable of detecting physical movements. These movements are translated into digital signals and transmitted to the developed Android mobile application over Bluetooth. This allows the user to communicate through the features of the mobile application, such as text-to-speech and pictograms, using a single-input interface. A case study was then conducted on two individuals who had both suffered a stroke that inhibited their speech and physical movements. Their limited but controllable movements, such as blinking and finger gestures, were detected. The results also showed a limitation of the system: it requires practice to use effectively. As such, the developed speech-assistive system can be a potential form of physical and mental rehabilitation, since a learning curve can be developed when using the mobile application. In conclusion, this study demonstrates an accessible and flexible speech-assistive system for disabled individuals to improve their mental health and social relationships with others.

Keywords—Speech-Assistive Technology, Android Mobile Application, Incomplete Locked-In Syndrome, Pictograms, Arduino

I. INTRODUCTION

In the contextual situation of the Philippines, many people are diagnosed with severe impairments that not only prevent them from fulfilling their basic day-to-day functions but also lead to a variety of mental and psychological problems. This is because social relationships play a large role in the upkeep of the mental health and wellbeing of persons, including those who have disabilities. As such, it is important to provide avenues for individuals with Incomplete Locked-In Syndrome (LIS), that is, individuals who are both speech-impaired and physically disabled, to communicate with other people in order to maintain their social integration within society.

Given this, there are currently speech-assistive technologies, also known as Augmentative and Alternative Communication (AAC) devices, which are essential tools of communication that help individuals with LIS interact independently with other people. A significant example of this AAC technology is the Assistive Context-Aware Toolkit (ACAT) by Intel, used by the renowned scientist Stephen Hawking, who was diagnosed with ALS [1].

However, not all hospitals, especially those in rural areas, are equipped with the technology to provide adequate treatment for speech impairments, which leaves the problem of accessibility of AAC devices in the country [2].

From here, there is a motivation to design AAC devices that are accessible, most especially to severely disabled and speech-impaired individuals. Since mobile phones are becoming prevalent in biomedical applications, a sensor device that makes use of whatever movement the individual can still control can be integrated into mobile applications. By providing more accessibility and expanding the scope of AAC devices, this can boost the independence and morale of disabled individuals and bring forward social closeness through communication in daily life.

As such, considering the contextual situation in the Philippines as well as current speech-assistive technologies, the main objectives of the study are the following:

• Design a sensor circuit that classifies and translates the available movements that can be accomplished by individuals with incomplete locked-in syndrome into digital signals
• Develop a mobile application that interfaces with the sensor circuit and has the following features:
   - Mobile keyboard access using the single-input interface from the sensor circuit
   - Text-to-speech interface
   - Pictograms that convert pictures directly into speech dialogues
• Evaluate the effectiveness of the system by conducting a case study on individuals who are diagnosed with disorders in the category of incomplete locked-in syndrome, such as stroke

II. REVIEW OF RELATED LITERATURE

A. Technologies using Assistive Context-Aware Toolkit

The Assistive Context-Aware Toolkit (ACAT) is open-source software designed to allow people with motor neuron diseases and other disabilities to have full access to the features of computer terminals that run the Windows operating system (OS) [3]. ACAT is intended to be used by developers and researchers to build systems that help the severely disabled access their computers and communicate with the people around them. Through ACAT, a person with ALS or a similar disability can operate a regular Windows OS computer, which is provided with a virtual keyboard, similar to those of mobile phones, coupled with a text-to-speech synthesizer.
A variety of sensor devices can be utilized as the actuator for ACAT. One example is Stephen Hawking's case, where infrared sensors were installed on his eyeglasses so that movements of his cheek could be detected. Another study [4] shows a pacifier, functioning as a Morse code controller, acting as the actuator for ACAT. In a further research study, various sensor devices, as shown in Fig. 1, were used as input devices to ACAT [5]. From here, various sensors that are able to detect movement can be utilized as tools for a speech-assistive system.

Fig. 1. Various sensors were utilized as input devices for ACAT [5]

B. Pictogram Approach in Mobile Application

Different studies have demonstrated the effectiveness of using pictograms as a way of communication for people with intellectual disabilities [6, 7]. AbleChat, an Android mobile application, makes use of pictograms for this specific type of application [8]. Based on that study, AbleChat has features which allow users to chat using pictograms without the need to type the specific words to deliver their messages.

Through AbleChat, disabled individuals were able to communicate with other people with ease. Meanwhile, from the perspective of the caretakers, AbleChat is indispensable because it enables the users to become more independent, self-confident, and more secure by providing a means of communication [8]. From here, the evaluation of AbleChat shows that communication using pictograms has high potential for those who are physically impaired, hence the need for further development in this area.

III. SYSTEM METHODOLOGY

The block diagram of the developed speech-assistive technology system is illustrated in Fig. 2. The system is divided into five stages: input, interfacing, data processing, data communications, and application.

Fig. 2. Block diagram of the developed speech-assistive technology system

First, in the input stage, multiple sensors were utilized to ensure the flexibility of the system in accommodating the various physical movements that can be accomplished by the disabled individual. As shown in Fig. 3, the sensors chosen in this study are the following: button switch, touch sensor, MPU6050 accelerometer, flex sensor, and Electromyography (EMG) MyoWare sensor. These sensors were interfaced into a sensor circuit box by constructing an analog circuit that automatically detects the sensor being used.

Fig. 3. Five sensors used in the system: Accelerometer, Button Switch, Touch Sensor, EMG MyoWare Sensor, and Flex Sensor

Additional peripherals are also used, as shown in Fig. 4. A hand glove with Velcro is used to easily attach the flex sensor and accelerometer to the hand of the user. On the other hand, electrodes are placed on the eyebrow muscle (frontalis) or eye-blinking muscle (orbicularis oculi) and are connected using electrode snap connectors.

Fig. 4. Hand glove with Velcro and electrode snap connectors
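As a rough illustration of how the sensor circuit box could identify which jack is in use, the Arduino-style sketch below probes each sensor channel and reports the active one. The pin mapping, presence threshold, and pull-down arrangement are assumptions made for this example rather than details taken from the paper; only the sensor set (button, touch, flex, EMG, and the MPU6050 accelerometer on I2C) follows the description above.

```cpp
// Sketch of the input stage: detect which of the five sensors is plugged in.
// Pin assignments and the "unplugged jack reads near 0 V" heuristic are
// illustrative assumptions, not the paper's actual circuit.
#include <Arduino.h>
#include <Wire.h>

const uint8_t PIN_BUTTON = A0;      // button switch through a voltage divider
const uint8_t PIN_TOUCH  = A1;      // touch sensor analog output
const uint8_t PIN_FLEX   = A2;      // flex sensor in a divider
const uint8_t PIN_EMG    = A3;      // MyoWare EMG envelope output
const uint8_t MPU6050_ADDR = 0x68;  // I2C address of the accelerometer

const int PRESENCE_THRESHOLD = 50;  // ADC counts; assumed "jack in use" level

enum SensorType { NONE, BUTTON, TOUCH, FLEX, EMG, ACCEL };

// Decide which single sensor is currently connected and active.
SensorType detectActiveSensor() {
  if (analogRead(PIN_BUTTON) > PRESENCE_THRESHOLD) return BUTTON;
  if (analogRead(PIN_TOUCH)  > PRESENCE_THRESHOLD) return TOUCH;
  if (analogRead(PIN_FLEX)   > PRESENCE_THRESHOLD) return FLEX;
  if (analogRead(PIN_EMG)    > PRESENCE_THRESHOLD) return EMG;
  // The MPU6050 is digital (I2C); probe for an ACK on its address.
  Wire.beginTransmission(MPU6050_ADDR);
  if (Wire.endTransmission() == 0) return ACCEL;
  return NONE;
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
}

void loop() {
  SensorType active = detectActiveSensor();
  Serial.println(active);  // visualized later through the serial monitor
  delay(200);
}
```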



The five sensors were assigned to different physical movements, as tabulated in Table I. Only one sensor is used at a time when operating the system, and its assigned movement serves as the primary means of controlling the system. It must be noted, however, that the sensors are not limited to these movements, as the system aims to accommodate any significant and controllable movement of the user.

TABLE I. ASSIGNMENT OF SENSORS TO DIFFERENT PHYSICAL MOVEMENTS

Type of Movement     Touch   Button   Flex   Accelerometer   EMG
Finger Flexion         ✓       ✓       ✓          ✓
Finger Extension                       ✓          ✓
Finger Abduction                                   ✓
Finger Adduction                                   ✓
Eyebrow Movement                                               ✓
Blinking                                                       ✓

For the interfacing stage, an analog circuit was constructed to connect all five sensors to an Arduino microcontroller, which processes all sensor signals. The sensor circuit box containing the analog circuit and the Arduino is shown in Fig. 5, where each sensor is connected through its corresponding input jack.

Fig. 5. Sensor circuit box integrating all sensors into the Arduino

In the data processing stage, three main algorithms were implemented in the Arduino: filtering, calibration, and thresholding. First, a moving average filter was implemented as a low-pass filter to remove high-frequency noise. Second, a calibration algorithm was implemented to set the threshold value used to detect the physical movement being performed by the user. This is achieved by having the user perform the physical movement three times, averaging the signal peaks, and multiplying the result by a set sensitivity value. To ensure the validity of data collection, the sampling rate for each sensor was checked, as shown in Table II. From here, the Arduino can accurately identify when the physical movement is performed by the user.

TABLE II. DATA SAMPLING RATE OF EACH SENSOR IN ARDUINO

Sensor          Sampling Rate (time per sample)   Sampling Rate (samples per second)
Touch           11 ms - 12 ms                      83.3 Hz - 90.9 Hz
Button          11 ms - 12 ms                      83.3 Hz - 90.9 Hz
Flex            14 ms - 16 ms                      62.5 Hz - 71.43 Hz
Accelerometer   17 ms - 18 ms                      55.5 Hz - 58.8 Hz
EMG Sensor      14 ms - 16 ms                      62.5 Hz - 71.43 Hz

In the data communications stage, for each physical movement detected by the Arduino, a corresponding digital signal is sent to the mobile application using Bluetooth communication via the HC-06 module shown in Fig. 6.

Fig. 6. HC-06 Bluetooth module

For the application stage, an Android mobile application was developed using the Android Studio software. On the initial launch of the mobile application, the user goes through a required series of steps to properly use the system. These steps include Bluetooth connection, sensor detection, and calibration. After this, the user may access the main features of the mobile application, shown in Fig. 7. By displaying a series of flashing boxes, the application can be controlled through a single-input interface: the user performs the physical movement to choose the currently highlighted box. This allows the user to type sentences and convert them to speech, and to use the set of pictograms, in which each image corresponds to a specific phrase for quick and easy communication.

Fig. 7. Main interface of the Android mobile application
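To make the data-processing and data-communications stages more concrete, the sketch below shows one possible Arduino-side implementation of the moving-average filter, the peak-based calibration, the thresholding, and the transmission of a digital event to the phone through the HC-06, which appears to the Arduino as a serial device. The window size, pin assignments, sensitivity value, timing constants, and one-byte message format are assumptions made for illustration; the paper does not specify them.

```cpp
// Minimal sketch of filtering, calibration, thresholding, and Bluetooth
// transmission, under the assumptions stated above.
#include <Arduino.h>
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);        // assumed RX/TX pins wired to the HC-06

const uint8_t SENSOR_PIN = A1;    // whichever jack was detected as active
const int WINDOW = 8;             // moving-average window length (assumed)
int window[WINDOW];
int idx = 0;
long runningSum = 0;

float sensitivity = 0.6;          // assumed sensitivity multiplier
int threshold = 1023;             // overwritten during calibration

// Moving-average low-pass filter over the last WINDOW samples.
int filteredRead() {
  int raw = analogRead(SENSOR_PIN);
  runningSum += raw - window[idx];
  window[idx] = raw;
  idx = (idx + 1) % WINDOW;
  return runningSum / WINDOW;
}

// Calibration: the user repeats the movement three times; the peaks are
// averaged and scaled by the sensitivity value to obtain the threshold.
void calibrate() {
  long peakSum = 0;
  for (int rep = 0; rep < 3; rep++) {
    int peak = 0;
    unsigned long start = millis();
    while (millis() - start < 2000) {   // assumed 2 s window per repetition
      int v = filteredRead();
      if (v > peak) peak = v;
    }
    peakSum += peak;
  }
  threshold = (peakSum / 3) * sensitivity;
}

void setup() {
  Serial.begin(9600);
  bt.begin(9600);                  // HC-06 default baud rate
  calibrate();
}

void loop() {
  // Thresholding: when the filtered signal crosses the calibrated threshold,
  // one digital "select" event is sent to the phone over Bluetooth.
  if (filteredRead() > threshold) {
    bt.write('1');                 // assumed one-byte event marker
    delay(300);                    // crude debounce so one movement = one event
  }
}
```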

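On the phone side, the single-input interface amounts to scanning: boxes flash one after another, and the single Bluetooth event selects whichever box is highlighted at that moment; a selected pictogram is then mapped to its phrase and spoken. The fragment below illustrates this selection logic only. It is written in C++ for brevity and is not the study's Android implementation; the scan stepping, the simulated input, and the phrase list are assumptions.

```cpp
// Illustrative scanning-selection logic (not the study's Android code).
#include <iostream>
#include <string>
#include <vector>

struct Pictogram {
  std::string label;
  std::string phrase;   // spoken through text-to-speech when selected
};

int main() {
  // A few of the pictogram-to-phrase pairs listed in Table III.
  std::vector<Pictogram> pictograms = {
      {"Yes", "Yes."}, {"No", "No."},
      {"Hungry", "I'm hungry."}, {"Thirsty", "I'm thirsty."},
  };

  int highlighted = 0;            // index of the currently flashing box
  bool inputEvent = false;        // set when an event arrives over Bluetooth

  // One scan step per iteration: advance the highlight, or select on input.
  // In the real application this would run on a timer and the selected
  // phrase would be handed to the text-to-speech engine.
  for (int step = 0; step < 10; ++step) {
    inputEvent = (step == 5);     // simulate the user's movement at step 5
    if (inputEvent) {
      std::cout << "Speak: " << pictograms[highlighted].phrase << "\n";
      break;
    }
    highlighted = (highlighted + 1) % pictograms.size();
  }
  return 0;
}
```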


The set of pictograms used in this system is shown in Table III; these are commonly used dialogues of disabled individuals. When the user chooses a specific pictogram, the mobile application directly converts it into audible speech.

TABLE III. PICTOGRAMS AND THEIR CORRESPONDING MESSAGES

(Each pictogram image corresponds to one spoken message.)
Yes.                    No.
Hello!                  Good Bye!
Good Morning!           I love you!
Good Night!             I'm sleepy.
I'm hungry.             I'm thirsty.
I want to watch TV.     Aircon, please.
Thank you!

Finally, to visualize the signal readings from each sensor, the data is collected in real time through the serial monitor of the Arduino. Furthermore, the digital signal received by the mobile application is also monitored through Android Studio. This setup is shown in Fig. 8. This was done to detect errors during signal reading and to make the necessary revisions to the calibration parameters of the sensors.

Fig. 8. Visualization of data through the Arduino and Android Studio

IV. CASE STUDY TESTING

A case study was conducted on two individuals with incomplete LIS to verify the effectiveness of the system in terms of the accuracy of detecting the physical movements and the ease with which the user can maneuver through the mobile application's features. Three sessions were allotted to test the system on the two participants, both of whom had suffered a stroke that inhibited their physical movements and speech.

A. First Case Study

In the initial testing of the system, the first participant was asked to perform the movements of interest specified in Table I. Significant and controllable movement was observed when the participant performed thumb flexion. As such, the button switch was mainly used by this participant. This is demonstrated in Fig. 9, where the button switch was placed in a position where the user could press it with ease. With the help of the device, the participant was able to type simple phrases such as "hello" and convert them into speech in the mobile application. The corresponding signal response of the user was recorded, as shown in Fig. 10, to visualize the accuracy of the system in detecting each movement. Furthermore, the participant was also able to use the flex sensor, touch sensor, and accelerometer to detect finger bending, extension, and flexion.

Fig. 9. Left: Participant using the mobile application through the button switch; Right: detailed view of the position of the button switch for detecting the participant's thumb flexion

Fig. 10. Signal response when using the button switch

From here, successful signal readings were accurately captured by the sensor circuit box, and the participant was able to use the mobile application with an adequate response time in choosing the proper letters and pictograms. The participant was able to construct phrases such as "hi" and "hello" and to choose pictograms while being guided to match his response time to the mobile application.



However, based on observations during testing, the participant still lacked the mental responsiveness to perform the physical movements independently. As such, guidance was required to help the participant use the features of the mobile application to communicate.

B. Second Case Study

The same procedure was followed for the second participant. Over the three sessions of the case study, the participant had difficulty performing significant finger gestures, as only limited movement could be achieved. However, significant control of eye blinking was observed. As such, as shown in Fig. 11, the participant used the EMG sensor to control the mobile application. However, due to the slow response time of the participant, at around two to three seconds, proper calibration of the sensors was not achieved. This negatively affected the movement detection algorithm, and the participant was not able to use the mobile application effectively.

Fig. 11. Participant using the EMG MyoWare sensor to detect blinking

Based on the results, enough practice is necessary when using the developed speech-assistive system, as there is an observable learning curve in its effective use. First, controllable movement from the participant is needed to properly use the system. Movements that vary in strength, or uncontrollable movements such as tremors, may cause problems for the system. Second, if the response time is slow, there is a tendency for the individual to miss the intended letters or pictograms, making the system difficult to use.

Given these results, the system can be a potential form of both physical and mental rehabilitation. Physical rehabilitation can be seen in the repeated movement of the fingers or facial muscles, which can strengthen over time when using the application. Meanwhile, mental rehabilitation can be seen in constructing words or, more simply, in choosing a row to select the intended letter or pictogram.

V. CONCLUSION

A speech-assistive system was successfully designed and implemented. The system comprises three major components: five sensors, a sensor circuit box, and an Android application. The sensor circuit box is composed of an Arduino, a Bluetooth module, and an analog circuit that integrates all the sensors. This sensor circuit box communicates with the developed Android mobile application via Bluetooth. By making significant and controllable movements, the user controls the mobile application with a single-input interface to select pictograms, construct sentences and phrases, and convert them into audible speech. In testing the system through a case study, the results show that the participants' limited but controllable movements allow them to control the mobile application, provided that they have the cognitive ability to do so. As such, enough practice in using the system is necessary to achieve an adequate response time when performing controllable physical movements. Furthermore, this speech-assistive device can also serve as a means of rehabilitating the physical and mental functions of a disabled individual. In conclusion, this system is a possible way of connecting speech-impaired and physically disabled individuals to society and may thus bring a positive impact to their mental health and social relationships.

REFERENCES

[1] "The technology that gave Stephen Hawking a voice should be accessible to all who need it", The Conversation, 2018. [Online]. Available: http://theconversation.com/the-technology-that-gave-stephen-hawking-a-voice-should-be-accessible-to-all-who-need-it-93418.
[2] J. Navarro, A. Baroque, J. Lokin and N. Venketasubramanian, "The real stroke burden in the Philippines", International Journal of Stroke, vol. 9, no. 5, pp. 640-641, 2014.
[3] "Assistive Context-Aware Toolkit (ACAT)", 01.org, 2018. [Online]. Available: https://01.org/acat.
[4] C. Wu, Y. Chen, J. Chin and S. Chen, "The developments of communication agent and home appliance automation agent based on Intel ACAT", 2018 IEEE International Conference on Applied System Innovation, 2018.
[5] C. Wu, Y. Chen and S. Chen, "The assistive input devices with Intel ACAT system for the severe disabled", 2018 IEEE International Conference on Applied System Innovation, 2018.
[6] J. Munemori, T. Fukuda, M. Mohd Yatid and J. Itou, "The pictograph chat communicator II", Lecture Notes in Computer Science.
[7] K. Wołk, A. Wołk and W. Glinkowski, "A cross-lingual mobile medical communication system prototype for foreigners and subjects with speech, hearing, and mental disabilities based on pictograms", Computational and Mathematical Methods in Medicine, vol. 2017, pp. 1-9, 2017.
[8] J. Daems, N. Bosch, S. Solberg, J. Dekelver and M. Kultsova, "AbleChat: Development of a chat app with pictograms for people with intellectual disabilities", Proceedings of the 2016 Conference on Engineering4Society, 2016.
