

International Journal of Innovative Science and Research Technology
Volume 4, Issue 10, October 2019, ISSN No: 2456-2165

Blind Assistant
Tharshigan Logeswaran, Krishnarajah Bhargavan, Ginthozan Varnakulasingam, Ganasampanthan Tizekesan
Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

Dhammika De Silva
Faculty of Computing, Sri Lanka Institute of Information Technology (SLIIT), Metro, Sri Lanka

Abstract:- Vision is essential for human beings to sense the world; for survival it is indispensable. Unfortunately, many people live with vision impairment throughout the world, and they face many challenges daily. The Blind Assistant project aims to provide support to blind people through computer technologies to help them overcome these challenges.

The project provides a blind assistant mobile application built with a combination of image processing and sound recognition technologies. Even though there are many blind assistant products on the market, they have failed to reach the visually impaired community. We therefore carried out a series of literature reviews of existing products to identify the obstacles to building a successful blind assistant product. Those analyses guided the implementation of the Blind Assistant product, which was built with machine learning, image processing and sound recognition technologies.

This product helps blind people to identify roads, road signs, vehicles and other roadside objects and guides them through voice output. The project focuses on the challenges faced by the blind in roadside scenarios and provides solutions to overcome those challenges. The product is a mobile application which can be handled easily by a blind user.

Keywords:- Machine Learning; Image Processing; Sound Recognition; Vision Impairment.

I. INTRODUCTION

Vision is related to sensing with the eye. There are continuously ongoing processes such as organization, identification and interpretation of sensory information to sense the environment. The eyes are the principal organs of the visual field; they enable visual sensory data to be received and processed.

Sound is another important sense, related to the identification and interpretation of the sounds in the surrounding environment. The ears are the main organs of hearing and have the ability to sense sound waves. Identification and interpretation of sound waves happen with the aid of the brain, primarily in the temporal lobe.

Sound and sight are interlinked with each other: one can confirm the reality of the other when the situation is unclear. The five senses are very important for perceiving the reality of the world, and among them sound and sight play the largest part.

Our world revolves around the preciseness of the five senses, and the loss of any one of them creates chaos in one's life. Of the five senses, vision is the most important for survival, as we deal with a large amount of visual sensory information on a daily basis. Visual impairment is a decreased ability to sense light to some degree. Globally, around 1.3 billion individuals are estimated to be living with some type of vision loss, and 36 million of them are blind [1]. Vision loss creates a major impact on people's daily activities, and they need to depend on others for their survival. There are many blind people who have achieved great things by overcoming these struggles, but they are few in number compared with the millions in the blind community. So, it is our responsibility to assist blind people in whichever way we can. The evolution of braille lettering, blind sticks, sensor technology and medical science provides a lot of assistance to the blind in perceiving the visual world.

Computer technology has also evolved to assist the blind through various object detection technologies and algorithms. This project likewise focuses on assisting blind people with the help of image recognition and sound recognition technologies. Image processing is the procedure of extracting image features suitable for computer processing, where image acquisition is carried out by an image-capturing device equivalent to the eye in the natural phenomenon. Identification and interpretation of images are done through a series of enhancement, segmentation and feature-extraction processes.

Sound recognition technology is able to identify and interpret various sounds through data processing, feature extraction and classification algorithms. Both sound recognition and image processing technologies are combined to identify an object accurately. In some situations we need to guess the object based on its sound; this idea is built into our system to provide better perception.

This project focuses on roadside scenarios, especially travelling along the pavement, identification of road signs, vehicle detection and location identification. The application provides instructions to the end user through voice output. The system's main goal is to capture images from the video stream using a camera mounted on spectacles, together with the surrounding audio from a microphone, send them to the mobile phone for further processing, and instruct the blind user via an earphone. The system is divided into four main components: sign board identification, assistance along the pavement, vehicle identification and location identification.
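The capture, detect and speak cycle described above can be pictured with a minimal sketch. This is only an illustration of the flow under assumed interfaces, not the authors' implementation; the objects and method names (camera, microphone, detectors, tts and their calls) are hypothetical placeholders.

```python
# Illustrative outline of the pipeline described above (all interfaces hypothetical).
# Each cycle grabs a camera frame and a short audio clip, runs the four detection
# components, and reads the combined findings out to the user.

def assist_loop(camera, microphone, detectors, tts):
    while True:
        frame = camera.capture_frame()           # image from the spectacle-mounted camera
        audio = microphone.capture_audio(1.0)    # roughly one second of surrounding sound

        findings = []
        findings += detectors.sign_boards(frame)         # sign board identification
        findings += detectors.pavement_obstacles(frame)  # assistance along the pavement
        findings += detectors.vehicles(frame, audio)     # image + sound vehicle detection
        findings += detectors.location()                 # location identification

        for message in findings:
            tts.speak(message)   # instruct the blind user through voice output
```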
II. RELATED WORK

Visually impaired people desperately need help with mobility, because barriers can cause injury. There are already several techniques for offering visually impaired mobility assistance, ranging from human assistants to contemporary appliances.

In past years, several types of research have been undertaken on removing the obstacles faced by visually impaired people. Over time, computer technology has influenced research on assisting blind people, and image processing technology has played a large role in object detection in such research. Many research projects and technology proposals continue to appear in order to detect objects accurately.

A project by R. Gude, M. Østerby and S. Soltveit [2] concentrated on individuals with blindness and visual impairments and created a concept to help them navigate indoors and outdoors and identify items in an environment. This project used a virtual eye (VE), a streaming video device used to browse the workplace. Computer technology interprets the video stream to recognize the object and to calculate its distance; this is done by tagging items with "Semacodes". The decoded data is sent to a device, held by the individual, that can generate braille. In this way, the braille device provides the user with data that helps him interpret the world around him [2].

The project by Chucai Yi, Roberto W. Flores, Ricardo Chincha and YingLi Tian [3] is a camera-based system intended to help blind or visually impaired individuals find their daily necessities. The proposed scheme identifies objects with a camera-based network and matching-based recognition. Datasets of daily necessities were gathered, and Speeded-Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT) feature descriptors were used to conduct object recognition. A multiple-camera system is constructed by putting cameras at key places in the enclosed environment of a blind user's daily routines. The cameras supervise the situation around these set places and tell blind people the places of their requested products. Matching-based image recognition is conducted throughout this phase to locate items. The program then reports the most comparable item, even if it matches only the length, and advises the blind person to get near its place. Later, the camera carried by the person conducts further identification to confirm the presence of the requested item.

Muhammad Sheikh Sadi, Saifuddin Mahmud, Md. Mostafa Kamal and Abu Ibne Bayazid [5] have proposed a solution for blind individuals to navigate securely by identifying a barrier and producing an associated warning signal according to the range of the barrier. The approach consists of a mobility assistant, integrated into a spectacle glass, made up of a barrier detection unit and an alarm generator. The barrier detection unit has a single ultrasonic sensor that can cover a range of 3 meters and an angle of 60 degrees to spot barriers. The unit produces a high-frequency signal through the ultrasonic sensor and evaluates the signal returned to the sensor; from this, the distance of the barrier is evaluated, and the information about the barrier is conveyed to the blind person through an alert generator that produces an alarm corresponding to the range.

Shubham Melvin Felix, Sumer Kumar and A. Veeramuthu [4] have researched a solution implemented as an Android mobile app that focuses on a voice assistant, image recognition, currency recognition, e-book reading, a chat bot, etc. The app can help in recognizing objects in the environment using a voice command and can perform text assessment to recognize the text in a hard-copy document. It is an effective way for blind individuals to communicate with the environment using technology and its equipment [4].

Md. Siddiqur Rahman Tanveer, M.M.A. Hashem and Md. Kowsar Hossain [6] have proposed a project which could help a blind person by tracking his movement and helping to rescue him if he is lost. The blind person is navigated with a spectacle-mounted Android application. The latitude and longitude collected using GPS are sent to a server, and another application can track the movement by getting responses from the server. In the system, obstacles are announced through voice output to guide the blind person, and an ultrasonic range finder is used to detect the position of the obstacle and to measure its distance.
Prince Bose, Apurva Malpthak, Utkarsh Bansal and Ashish Harsola [7] have proposed a system to assist the blind in using the internet and other digital forms of information with the help of a voice assistant instead of braille displays and keyboards. Audio input helps the blind to access e-mail, daily news and weather forecasts and to maintain notes.

Vincent Gaudissart, Silvio Ferreira, Céline Thillou and Bernard Gosselin have researched providing textual information to blind or visually impaired people as voice output. An embedded device is used to read textual information encountered in day-to-day activities, such as banknotes, train schedules and postal letters. The image taken by the embedded camera is sent to an image processing module and a text-to-speech module, which then synthesize the information for the user [8].

In most of the existing projects, augmented reality and sensor technology were used to detect obstacles in front of the user. Some research has been done on image processing, but it mostly addresses assisting the blind in indoor activities such as detecting daily utensils, currency, reading books, household items and so on. Projects aimed at outdoor activities are commonly based on sensor technologies, where obstacles and their distances are detected but details about the objects are ignored, and they are in some ways costly.

Earlier research based on image processing has limited accuracy in critical situations, such as identifying faraway vehicles, vehicles hidden from view by factors such as decorations and banners, or apparent vehicles that are illusions in the image. So, this research views an object as a combination of sound and image, where each one confirms the reality of the other. This project has been carried out with the idea of combining image processing and sound recognition technology to identify objects such as vehicles, people and signs, in order to improve the accuracy of the blind assistant product.

Many devices have been implemented using image processing or other technologies, such as sensors, for visually impaired people, but these devices do not have traffic-sign detection features. The goal of this part (traffic sign detection) is to help visually impaired people, who mostly struggle while crossing the road. In normal situations they can act independently with the help of their white cane; only in situations like crossing the road or finding a bus stop do they depend on a third person and must ask whether there is a zebra crossing or a bus stop. So, this device has the feature of detecting traffic signs and alerting the user.

Detecting obstacles alone is not enough for a blind person to avoid injuries, because they do not know where the obstacle is. So, the system uses an ultrasonic sensor to calculate the distance of the obstacle from the blind person. The ultrasonic sensor sends signals from its transmitter and receives the signal back after it collides with an obstacle; the distance can be calculated from the signal's travel time.
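As a worked example of this calculation, the sketch below applies the usual relation distance = (speed of sound × echo time) / 2, where the division by two accounts for the pulse travelling out and back. The speed-of-sound constant is the standard value for air at room temperature, not a figure taken from the paper.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees Celsius

def distance_from_echo(echo_seconds: float) -> float:
    """Distance to the obstacle, given the measured round-trip echo time."""
    return SPEED_OF_SOUND * echo_seconds / 2.0  # halved: the pulse travels out and back

# Example: an echo received after 5.8 ms corresponds to roughly one metre.
print(f"{distance_from_echo(0.0058):.2f} m")  # about 0.99 m
```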
The image taken from the camera may not be clear; it may be distorted, blurry, noisy, dark, or in any other form that cannot be used for identification of an obstacle. Using an unclear image may lead to wrong assistance being given to the blind person. So, this system has a pre-processing mechanism to enhance the quality of the image, and obstacles are then identified with an ML model. Effective and efficient machine learning algorithms have been used to predict obstacles accurately.
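The paper does not specify which enhancement steps are applied, so the sketch below shows one plausible pre-processing chain with OpenCV (grayscale conversion, denoising and CLAHE contrast enhancement), purely as an illustration of the kind of clean-up meant here.

```python
import cv2

def preprocess(frame):
    """Illustrative clean-up of a camera frame before it reaches the ML model.

    The concrete steps used by the system are not stated in the paper;
    denoising plus CLAHE contrast enhancement is one common choice.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # drop colour information
    denoised = cv2.fastNlMeansDenoising(gray, h=10)   # suppress sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)                      # lift dark, low-contrast regions

# Usage: enhanced = preprocess(cv2.imread("frame.jpg"))
```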
There has been research on vehicle detection based on sound, but it has not been incorporated into devices for assisting the blind. This research proposes combining image processing and sound recognition in the vehicle detection process to increase accuracy. In real life, humans detect an object with the help of the five senses; at a subtle level these senses are interlinked to perceive the object accurately, and among them sight and sound play the major roles in object detection. Based on this, the idea of combining image processing and sound recognition technologies to detect vehicles has been proposed.

Vehicle sounds are complex and can be influenced by various variables such as the type of car, the gear, the number of cylinders, the choice of muffler, the maintenance status, the operating velocity, the distance from the microphone, the tires and the road the car is travelling on. Noise has been removed from the vehicle sounds to improve car detection effectiveness.

Vehicle sound has many attributes: loudness, sound energy, pitch and wavelength. These attributes need to be analyzed to detect vehicles. Effective and efficient machine learning algorithms have been used to predict vehicles accurately. Datasets have been created by capturing vehicle sounds from the environment; after removing noise, the audio features have been extracted and stored.

A microphone receives the environmental sound, and vehicle detection is done in real time: the input audio data is processed and compared with the trained models. The output is combined with the image-based vehicle detection output, and the result is delivered to the user as voice output.

III. ARCHITECTURE OF THE SYSTEM

Fig 1:- High level Architecture

IV. METHODOLOGY
First, requirements were collected by having conversations with blind people. Then several research papers were read, and related project ideas, technology stacks and non-functional requirements were identified and analyzed against the blind people's requirements. After that, a new system was proposed which satisfies the blind users' basic requirements and is more efficient in every aspect.

Before starting the coding phase, the requirements were checked for feasibility with regard to time, cost and scope, and a high-level architectural diagram was drawn to give direction for finishing the project successfully. Then the research areas were studied by the team, and the following technology stacks and tools were identified:
1) Android Studio
2) Colab
3) TensorFlow
4) Ultrasonic sensor

According to our proposed system, the main functionalities are as follows:
1) Detect the obstacles and vehicles.
2) Vehicle detection based on vehicle sound.
3) Measure the distance between object and person.

1) Detect the obstacles and vehicles
To detect vehicles and obstacles, the datasets were preprocessed and trained in Colab using an R-CNN. As normal laptops did not have enough GPU capacity, Google Colab was chosen, as it is a cloud-based data science workspace. The trained models were then converted to TensorFlow Lite models, as this format is designed specifically for mobile devices.
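The conversion step can be sketched as follows. The export format (a SavedModel directory) and the file names are assumptions made for illustration; the paper only states that the trained model is converted to TensorFlow Lite for use on the phone.

```python
import tensorflow as tf

# Convert a trained detector (exported as a SavedModel; directory name is an example)
# into a TensorFlow Lite model that can run on the mobile device.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_detector/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimisation

tflite_model = converter.convert()
with open("detector.tflite", "wb") as f:
    f.write(tflite_model)
```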
2) Vehicle Detection based on Vehicle Sound

a) Extraction of Features of Audio and Audio Dataset Creation
The audio dataset has been created by recording sounds from the environment. Noise in the input sound has been removed, and audio features such as amplitude and decibel value have been extracted and stored in a CSV file. In this way the audio dataset has been created.
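A minimal sketch of this dataset-building step is given below. The libraries (librosa and NumPy), the file-naming convention used for labels and the exact feature set are assumptions; the paper states only that amplitude and decibel values are extracted and stored in a CSV file.

```python
import csv
import glob

import librosa
import numpy as np

def extract_features(path):
    """Mean amplitude and mean decibel level of one recording (illustrative features)."""
    samples, sr = librosa.load(path, sr=None)              # keep the original sampling rate
    mean_amplitude = float(np.mean(np.abs(samples)))
    magnitudes = np.abs(librosa.stft(samples))              # magnitude spectrogram
    mean_db = float(np.mean(librosa.amplitude_to_db(magnitudes, ref=np.max)))
    return mean_amplitude, mean_db

with open("vehicle_audio_features.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "mean_amplitude", "mean_db", "label"])
    for path in glob.glob("recordings/*.wav"):              # hypothetical folder layout
        amp, db = extract_features(path)
        label = "vehicle" if "vehicle" in path else "background"  # assumed naming scheme
        writer.writerow([path, amp, db, label])
```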

Fig 2:- Detection of Vehicle based on Vehicle Sound

In real time, audio will be obtained through the mobile microphone. Thereafter, audio features such as amplitude and decibel value will be extracted and stored.

When an audio file is received by the system, its sampling frequency and sound object are read and stored. Then the audio file's data type is read out. The system recognizes only stereo audio files, so if a mono-channel audio file is received, it is converted into a stereo-channel audio file. The amplitude is extracted from the sound object and stored in a CSV file.
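A sketch of this reading and channel-handling step is shown below, assuming WAV input read with SciPy; the paper does not name the library used.

```python
import numpy as np
from scipy.io import wavfile

def load_stereo(path):
    """Read a WAV file; duplicate the channel if it is mono so later code sees stereo."""
    sample_rate, samples = wavfile.read(path)          # sampling frequency and sound object
    if samples.ndim == 1:                               # mono: a single channel
        samples = np.column_stack((samples, samples))   # duplicate it into two channels
    return sample_rate, samples

sample_rate, samples = load_stereo("clip.wav")          # example file name
amplitude = np.abs(samples).mean(axis=0)                # mean absolute amplitude per channel
print(sample_rate, amplitude)
```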
Researchers suggest that amplitude and decibel values are useful in sound recognition [9]. The STFT (Short-Time Fourier Transform) is used to produce a frequency array from the sound input, and from the frequency array the decibel value can be found using a logarithmic equation.

Many amplitude and power values of the sound are generated each second, so the mean of the amplitude and power needs to be found to analyze the vehicle sound.
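The logarithmic relation referred to here is the usual dB = 20 log10(|X(f)|). A short sketch using SciPy's STFT is given below as an illustration; the window length is an arbitrary choice rather than a value from the paper. Because many frames are produced each second, the final line averages them, which is the kind of mean value the text describes.

```python
import numpy as np
from scipy.signal import stft

def decibel_spectrum(samples, sample_rate):
    """STFT magnitudes converted to decibels, plus the mean level per time frame.

    `samples` is a 1-D array; take one channel (e.g. samples[:, 0]) if the input is stereo.
    """
    freqs, times, Z = stft(samples, fs=sample_rate, nperseg=1024)  # frequency array over time
    magnitude = np.abs(Z)
    db = 20.0 * np.log10(magnitude + 1e-10)   # logarithmic equation; epsilon avoids log(0)
    mean_db_per_frame = db.mean(axis=0)       # average level in each time frame
    return freqs, times, db, mean_db_per_frame
```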

Fig 3:- Decibel vs Frequency graph

b) Audio dataset training and object detection
After creation of the audio dataset, the system was trained with those datasets using machine learning algorithms. Using the random forest algorithm, the system was trained with the datasets obtained as samples. The features extracted from a new audio input are compared with the trained model, and the output result is sent to the main system for further processing, where it is used to detect vehicles in combination with the image-processing-based vehicle detection techniques.
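The training and prediction described here can be sketched with scikit-learn. The CSV layout matches the hypothetical one used in the earlier feature-extraction sketch and is not prescribed by the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a random forest on the extracted audio features (illustrative file and column names).
data = pd.read_csv("vehicle_audio_features.csv")
X = data[["mean_amplitude", "mean_db"]].to_numpy()
y = data["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

def classify_clip(mean_amplitude, mean_db):
    """Classify a live clip from the same two features extracted in real time."""
    return model.predict([[mean_amplitude, mean_db]])[0]
```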
3) Measure the Distance between Object and Person
Using ultrasonic sensors, the gap between the individual and barriers is measured by evaluating the time taken by the ultrasonic waves to travel from the sensor and return to it after striking the barrier.

Finally, all the components were integrated within Android Studio, and using Google's voice API, the outputs are generated as audio corresponding to the mobile camera stream.

V. RESULTS & DISCUSSION

The main goal of the system is to assist the blind in roadside scenarios such as walking on the pavement, waiting to board a bus or three-wheeler, and getting information about the current location. The system contains four main components: sign board identification, assistance along the pavement, vehicle identification and location identification. A blind person can access the different features of the system through voice commands, and the response from the device is delivered to the blind person as voice. The device is a mobile application.

The system captured and detected traffic signs, pavement obstacles and vehicles (buses and three-wheelers). These raw data are analyzed by image processing and features are extracted. The extracted data are compared with the ML models for detecting the object, and the result is converted to audio output.

If a blind person needs to detect a bus or three-wheeler, he can access the vehicle detection feature through a voice command. The device uses image processing and sound recognition techniques to detect vehicles; therefore, a vehicle can be detected both from the video received through the camera and from the sound of the engine received through the microphone. The results from both detection techniques are combined to give an accurate prediction about the vehicle, and the detected vehicle details are delivered to the user as voice output.

Vehicles are detected from engine sound through a series of sound recognition techniques. The system is trained on the created datasets using machine learning algorithms. Features extracted from vehicle sound gathered in real time are compared with the trained datasets and the vehicle is predicted. The output result is sent to the main system for further processing, where it is used to detect the object in combination with the other object detection techniques.

Vehicle detection accuracy is increased by combining image processing and sound recognition technology. The system detects the category of vehicle, whether it is a bus or a three-wheeler, based on the vehicle sound; meanwhile, the vehicle is identified from the video input received from the surrounding environment. In such situations, both identifications are combined to provide a final result which is more accurate. Even though a result predicted only from the video input or only from the audio input is less accurate, vehicle detection from sound acts as a supplement to vehicle detection from video, and vice versa, in case either detection method fails.
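One simple way to realise this combination is a rule that accepts a detection when either channel is confident and keeps the stronger score when both agree; the threshold and averaging used below are illustrative choices, not values reported by the authors.

```python
def combine_detections(image_conf, sound_conf, threshold=0.6):
    """Fuse image-based and sound-based vehicle confidences (both in [0, 1]).

    Either channel alone can trigger a detection, so one acts as a backup when
    the other fails; agreement between the two yields the strongest result.
    """
    if image_conf >= threshold and sound_conf >= threshold:
        return True, max(image_conf, sound_conf)       # both agree: keep the stronger score
    if image_conf >= threshold or sound_conf >= threshold:
        return True, (image_conf + sound_conf) / 2.0   # single-channel fallback
    return False, (image_conf + sound_conf) / 2.0      # neither channel is confident

# Example: the camera view is partly blocked (0.35) but the engine is clearly heard (0.90).
print(combine_detections(0.35, 0.90))  # (True, 0.625)
```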
VI. FUTURE WORK

In future, the system will be enhanced to reduce response time and to identify obstacles and other objects more accurately, and it will be upgraded with night vision. Furthermore, the system will be developed as a dedicated device with suitable hardware (wide-angle camera, stabilizer, microphone, etc.). The backend will be deployed on a server, as this would allow the system to give assistance for more objects and let the user easily receive new features without trouble. For now, the vehicle identification process concentrates on buses and three-wheelers; in future, the project could be expanded to all sorts of transport vehicles.

VII. CONCLUSION

The project goal is to assist the blind in roadside scenarios such as walking on the pavement and boarding a transport vehicle. The system provides features such as sign board identification, assistance along the pavement, vehicle identification and location identification to achieve the status of a complete blind assistant product. The user can access the above features through voice commands, which provide ease of communication between the user and the device. Vehicle detection accuracy has been increased by combining image processing and sound recognition technology. The system can detect the category of a vehicle, whether it is a bus or a three-wheeler, based on the vehicle sound; meanwhile, the vehicle can be identified from the video input received from the surrounding environment. In such situations, both identifications are combined to provide a result which is more accurate. In all instances, the user is instructed through voice output.

ACKNOWLEDGMENT

First of all, we would like to thank the Sri Lanka Institute of Information Technology for offering the chance and platform for developing and exposing our talent in the field of IT studies, and for all the collective arrangements made to make this project successful. Without the guidance of our research supervisor, Mr. Dhammika De Silva, this project would not have been feasible. We would like to thank him sincerely for his precious ideas and support in making this project a success. His guidance and concern were deeply appreciated throughout our project studies. We would also like to appreciate the panel members' guidance.

REFERENCES

[1]. "Blindness and vision impairment", who.int (11-October-2018). [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment. [Accessed: 02-March-2019].
[2]. R. Gude, M. Østerby and S. Soltveit, "Blind navigation and object recognition", daimi.au.dk, Laboratory for Computational Stochastics, University of Aarhus, Denmark (2008). [Online]. Available: http://www.daimi.au.dk/~mdz/BlindNavigation_and_ObjectRecognition.pdf. [Accessed: 20-February-2019].
[3]. Chucai Yi, Roberto W. Flores, Ricardo Chincha and YingLi Tian, "Finding objects for assisting blind people", link.springer.com (07-February-2013). [Online]. Available: https://link.springer.com/article/10.1007%2Fs13721-013-0026-x. [Accessed: 25-February-2019].
[4]. Shubham Melvin Felix, Sumer Kumar and A. Veeramuthu, "A Smart Personal AI Assistant for Visually Impaired People", ieeexplore.ieee.org, IEEE (2018). [Online]. Available: https://ieeexplore.ieee.org/document/8553750. [Accessed: 21-February-2019].
[5]. Muhammad Sheikh Sadi, Saifuddin Mahmud, Md. Mostafa Kamal and Abu Ibne Bayazid, "Automated Walk-in Assistant for the Blinds", ieeexplore.ieee.org, IEEE (2014). [Online]. Available: https://ieeexplore.ieee.org/document/6919037. [Accessed: 25-February-2019].
[6]. Md. Siddiqur Rahman Tanveer, M.M.A. Hashem and Md. Kowsar Hossain, "Android assistant EyeMate for blind and blind tracker", ieeexplore.ieee.org, IEEE (2015). [Online]. Available: https://ieeexplore.ieee.org/document/7488080. [Accessed: 1-March-2019].
[7]. Prince Bose, Apurva Malpthak, Utkarsh Bansal and Ashish Harsola, "Digital Assistant for the Blind", ieeexplore.ieee.org, IEEE (2017). [Online]. Available: https://ieeexplore.ieee.org/document/8226327. [Accessed: 21-February-2019].
[8]. Vincent Gaudissart, Silvio Ferreira, Céline Thillou and Bernard Gosselin, "Sypole: A mobile assistant for the blind", ieeexplore.ieee.org, IEEE (2005). [Online]. Available: https://ieeexplore.ieee.org/document/7078510. [Accessed: 18-February-2019].
[9]. D. Hoiem, Yan Ke and R. Sukthankar, "SOLAR: sound object localization and retrieval in complex audio environments", ieeexplore.ieee.org, IEEE (2005). [Online]. Available: https://ieeexplore.ieee.org/abstract/document/1416332. [Accessed: 28-February-2019].
