Autonomous Wheelchair for ALS and CP Patients

Sensor Integration and Obstacle Avoidance Algorithms



Abstract: The aim of this research project is to design, build,
and program an autonomous wheelchair for people with severe
disabilities, including Amyotrophic Lateral Sclerosis (ALS) and
Cerebral Palsy (CP) patients, in order to provide them with
increased freedom of mobility. At this stage of the project, the
wheelchair is controlled by a joystick while obstacle avoidance
and stair detection are developed. Several sensors have been
integrated to provide environmental information, including
infrared sensors, ultrasonic sensors, and LiDAR. The Robot
Operating System (ROS) was used to develop all programs and to
process the data from the sensors; this data can then be used to
control motor speed and direction. The results of testing each of
the sensors and algorithms on the wheelchair test bed are
presented in this paper.
I. INTRODUCTION
Commercial electric wheelchairs are commonly used by the
disabled and by senior citizens. Electric wheelchairs have become
especially beneficial to those with disabilities, as they improve
patients' quality of life, independence, and self-esteem [6].
Most electric wheelchairs are controlled by manual joysticks,
requiring users to have adequate motor dexterity and the ability
to make decisions in emergency situations. Severely disabled
patients, including Amyotrophic Lateral Sclerosis (ALS) and
Cerebral Palsy (CP) patients, may lack the skills needed to fully
control the wheelchair. Thus, a completely autonomous wheelchair
would benefit ALS and CP patients by giving them increased
navigational freedom in their daily lives.
Shared control, the combination of the user's operation and the
robot's assessment of the environment, is one of the core parts
of an intelligent wheelchair. Montesano et al. chose a
final-destination strategy of shared control, in which users are
only in charge of high-level decisions [6]. They implemented a
touchscreen system that presents a 3-D visual online map of the
environment instead of an a priori map, allowing real-time
information to be updated on the screen. Users choose their
commands by touching the screen, which requires the ability to
move their arms and fingers.
However, for patients who still have the ability to use a
higher-level interface, it would be frustrating to have no
control over the wheelchair's path. To resolve this issue,
Nguyen et al. developed a semi-autonomous machine that includes
both low-rate information interfaces, such as brain-computer
interfaces (BCIs), and high-rate information interfaces, such as
a joystick or a head-movement detector [2]. In Nguyen et al.'s
semi-autonomous wheelchair, users have full control if no
collision is detected, but if an imminent collision is detected,
the navigation algorithms take charge [2]. The user interface is
the platform through which users communicate with the computer;
the semi-autonomous machine offers four interface types:
joystick, head movement, BCI, and iPhone 4/iPad.
Another example of a shared control strategy was developed in
the intelligent wheelchair PerMMA, introduced by Wang et al.
PerMMA is a personal assistive robot consisting of a robotic
wheelchair and two Manus ARMs, each of which provides six
degrees of freedom and a two-finger gripper [5]. PerMMA has
three control modes: local user mode, remote user mode, and
cooperative mode. The cooperative mode allows a caregiver to
work alongside the patient in controlling the wheelchair. The
second generation of PerMMA also adds the ability to traverse
difficult terrain, such as curbs and stairs, via four
independent pneumatic actuators, as shown in Figure 1.

Figure 1 Curb-climbing PerMMA by Wang et al. [5]
The navigation system should be capable of handling dynamic as
well as static environments; for example, people in motion need
to be detected and avoided. Unpredictable scenarios should also
be considered, so the wheelchair should not rely on an a priori
map [6]. An a priori map works well for a specific, known
environment, but the wheelchair then cannot be used in other
environments, and inputting and storing a map for every
situation is redundant. Montesano et al. chose to use a binary
occupancy grid map for path planning [6]. This method is simple
to implement but requires a large amount of memory, since all
the grid maps have to be saved on the computer. Another way to
perform path planning is presented by Ladha et al., shown in
Figure 2: they find the longest possible path based on a priori
factors and an evaluation algorithm applied to the data
retrieved from a 240-degree LiDAR-scanned area [8]. A third
method is presented by Nguyen et al., who used a Bayesian
recursive method within the Levenberg-Marquardt optimization
approach to find the most appropriate direction of travel based
on limited command information and the environment data [2].

Shen Li, Susmita Sanyal, Tianyu Zhang, Kelilah Wolkowicz, and Sean Brennan.
S. Li and S. Sanyal are with the Department of Computer Science and
Engineering, College of Engineering, The Pennsylvania State University,
State College, PA 16802, USA (e-mail: [email protected]).
T. Zhang, K. Wolkowicz, and S. Brennan are with the Department of
Mechanical and Nuclear Engineering, College of Engineering, The
Pennsylvania State University, State College, PA 16802, USA.

Figure 2 Path planning by Ladha et al. [8]
II. DEVICE INTEGRATION
The autonomous wheelchair was fabricated from a Jazzy Select 6
electric wheelchair, shown in Figure 3.

Figure 3 Wheelchair
Multiple electronic devices were installed on the wheelchair to
ensure user safety. A Hokuyo URG-04LX Laser RangeFinder
(LiDAR), a remote sensing device that analyzes reflected laser
light [1], serves as the eyes of the wheelchair. The LiDAR is
installed on top of the wheelchair to give it a wide field of
view. It continuously scans a 240-degree detection range of the
surroundings and generates a 2-D map, giving the computer
accurate position and size information for all obstacles in
that area. Specific algorithms, discussed in the following
section, are implemented to handle the LiDAR data.
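
To make the data flow concrete, the following minimal sketch shows
how such scan data can be received in ROS; the /scan topic name and
the sensor_msgs/LaserScan message type are assumptions based on
standard ROS laser drivers, not details taken from the project code.

```python
# Minimal sketch of receiving LiDAR data in ROS. The "/scan" topic
# name and LaserScan message type are assumptions based on standard
# ROS laser drivers, not the project's actual configuration.
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # Pair each range reading with its angle to obtain the
    # (rho, theta) polar points used by the algorithms in Section III.
    points = [(r, scan.angle_min + i * scan.angle_increment)
              for i, r in enumerate(scan.ranges)
              if scan.range_min < r < scan.range_max]

if __name__ == '__main__':
    rospy.init_node('lidar_listener')
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()
```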
Infrared sensors are mounted for stair and curb detection.
These sensors warn the wheelchair when it is approaching a
stairway, giving it enough time to slow down, brake, or turn.
There are also two emergency switches on the wheelchair test
bed. The first is an emergency hard-stop that allows the
researchers to shut down the whole system, including the
computer and motors, in case anything goes wrong during
testing. The second is a safety soft-stop that lets the user
stop only the motors, as an emergency brake; in this way, the
computer log files are preserved even after the user stops the
motors.
On the user's right-hand side, a Jazzy Select GT GC joystick
(CTLDC1574) is currently mounted for testing. The joystick
sends real-time user commands to the computer to dictate
direction and velocity. The computer then processes the
information and controls the wheels according to the user's
commands.
Wheel encoders have been mounted and feed real-time wheel speed
data back to the computer so the wheels can be calibrated in
case of inaccuracy. The motor controller is the interface
between the computer and the wheels; it receives commands from
the computer and sends them directly to the wheels. In the
future, a webcam will be used as a video camera that feeds
real-time images of the environment to the user. An IMU
(inertial measurement unit), an electronic device that measures
velocity, orientation, and gravitational forces, will also be
mounted to make sure the wheelchair accurately navigates its
environment.
To coordinate all the devices, a Linux computer running the
Robot Operating System (ROS) is mounted on the wheelchair.
Programs are implemented in Python and incorporated into ROS.
Each electronic device has a corresponding node that
communicates its data and pushes commands to the motor
controller. For example, the IR sensor in Figure 4 continuously
measures the distance from the wheelchair to the ground and
sends it back to its ROS node. The node processes the data and
decides whether a stair is present. Based on that decision, the
node sends messages to the motor controller node, which
commands the wheels to react to the situation.

Figure 4 IR sensor - stair detection
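
As an illustration of this node's decision logic, a hedged sketch
follows; the topic names, message types, and distance thresholds
are hypothetical values chosen for illustration, not the project's
actual code.

```python
# Hypothetical sketch of the stair-detection logic described above.
# Topic names, message types, and threshold values are illustrative
# assumptions, not the project's actual code.
import rospy
from std_msgs.msg import Float32, String

GROUND_DISTANCE = 0.25   # expected sensor-to-ground distance (m), assumed
STAIR_THRESHOLD = 0.15   # extra distance suggesting a drop-off (m), assumed

def ir_callback(msg, stop_pub):
    # A reading much larger than the normal ground distance means the
    # floor has dropped away, i.e. a stair or curb edge is ahead.
    if msg.data > GROUND_DISTANCE + STAIR_THRESHOLD:
        stop_pub.publish('STOP')

if __name__ == '__main__':
    rospy.init_node('stair_detector')
    stop_pub = rospy.Publisher('motor_commands', String, queue_size=1)
    rospy.Subscriber('ir_distance', Float32, ir_callback,
                     callback_args=stop_pub)
    rospy.spin()
```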
III. ALGORITHM DEVELOPMENT
The core part of autonomous wheelchair path planning is the
LiDAR data processing algorithm. All objects around the
wheelchair are represented as a set of points, each given by
rho and theta in a polar coordinate system. The LiDAR
visualization is shown in Figure 5: the object in front of the
wheelchair is close enough that a collision may occur, so it
lies in the dangerous zone and is highlighted by a red ball. In
Figure 6, the object is farther from the wheelchair and is
marked with a green ball, since it is in the safe zone.

Figure 5 Dangerous zone

Figure 6 Safe zone
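
Because each LiDAR point arrives as a (rho, theta) pair, the zone
test reduces to a simple range check. The sketch below assumes a
1.0 m danger radius, an illustrative value only.

```python
# Illustrative zone test for the LiDAR points shown in Figures 5
# and 6. The 1.0 m danger radius is an assumed value, not one
# reported in the paper.
import math

DANGER_RADIUS = 1.0  # metres, assumed

def classify(points):
    """points: list of (rho, theta) polar readings from the LiDAR."""
    labelled = []
    for rho, theta in points:
        # Cartesian coordinates, used for the visualization.
        x, y = rho * math.cos(theta), rho * math.sin(theta)
        zone = 'dangerous' if rho < DANGER_RADIUS else 'safe'
        labelled.append((x, y, zone))
    return labelled
```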
The real-time data the computer receives from the LiDAR is a
2-D map of points. According to Norouzi, there are two common
methods for interpreting these data [7].
1. Point-to-point matching eliminates noise by comparing two
consecutive scans and iteratively minimizing the distance
between the scans. This method does not transform the
environment data into geometric features; it deals purely
with points, so normal interpretation rules, such as a
circle of points usually representing a person, cannot be
applied. (A toy sketch of this idea follows the list.)
2. Feature-to-feature matching transforms all the raw scan
data into geometric features, usually lines. Through a line
extraction procedure, the data are interpreted with greater
accuracy and efficiency while using less memory.
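
The following toy sketch illustrates the point-to-point idea in its
simplest translation-only form; real implementations such as ICP
also estimate rotation, and the ten-iteration limit is an arbitrary
choice.

```python
# Toy illustration of point-to-point matching: iteratively shift one
# scan toward the previous one by the mean offset of nearest-neighbor
# pairs. A real implementation (e.g. ICP) also estimates rotation.
import numpy as np

def match_scans(prev_scan, new_scan, iterations=10):
    """Both scans: Nx2 arrays of (x, y) points. Returns aligned new scan."""
    aligned = np.asarray(new_scan, dtype=float).copy()
    prev_pts = np.asarray(prev_scan, dtype=float)
    for _ in range(iterations):
        # Pair each point with its nearest neighbor in the previous scan.
        d = np.linalg.norm(aligned[:, None, :] - prev_pts[None, :, :], axis=2)
        nearest = prev_pts[d.argmin(axis=1)]
        # Move by the mean residual to reduce the scan-to-scan distance.
        aligned += (nearest - aligned).mean(axis=0)
    return aligned
```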
Feature-to-feature algorithms are implemented on the wheelchair
in Python. The first step of information extraction from LiDAR
data is line extraction, and there are multiple ways to
implement it. According to Nguyen, the fastest algorithm is
Split and Merge [4]. Another method for line extraction is the
Hough Transform, as presented by Norouzi [7]. An explanation of
each of these methods is given in the following subsections.
Based on Nguyen's comparison results, shown in Figure 7, Split
and Merge runs at 1780 Hz, much faster than the Hough Transform
at only 6.6 Hz; in terms of precision, Split and Merge is also
slightly better than the Hough Transform [4].

Figure 7 Line extraction algorithm comparison by Nguyen [4]
A. Split and Merge
The basic idea of Split and Merge is that if the single fitted
line generated from a set of points does not accurately
represent all the points in the set, the set is split into two
halves. From each half, a new regression line can be generated
that better represents its subset. Based on Nguyen's pseudo
code, shown in Figure 8, linear regression is first performed
on all the LiDAR raw data points [4]. If the fitting error is
larger than a given threshold, the entire set of points is
split at a pivot point, defined as the point with the maximum
distance from the current regression line of the whole set.
The splitting procedure is recursive and produces many short
regression line segments; because the pivot point may be
assigned to either of the two subsets, inaccuracies can arise.
Therefore, after splitting, merging is necessary: adjacent
collinear regression lines or points are merged into one line
so that there are not too many repeated, similar lines.

Figure 8 Split and Merge Pseudo Code by Nguyen [4]
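
The following Python sketch captures the recursive split phase;
the total-least-squares fit and the 0.05 m error threshold are
illustrative assumptions, and the merge phase is omitted for
brevity.

```python
# Sketch of the recursive split phase, loosely following Nguyen's
# pseudo code [4]. The least-squares fit and the 0.05 m threshold
# are illustrative assumptions; the merge phase is not shown.
import numpy as np

ERROR_THRESHOLD = 0.05  # max allowed point-to-line distance (m), assumed

def fit_line(points):
    """Total least-squares line fit; returns (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def point_line_distance(p, line):
    """Perpendicular distance from point p to the fitted line."""
    centroid, direction = line
    v = np.asarray(p, dtype=float) - centroid
    return np.linalg.norm(v - np.dot(v, direction) * direction)

def split(points):
    """points: list of (x, y) pairs. Split until every point fits."""
    line = fit_line(points)
    dists = [point_line_distance(p, line) for p in points]
    worst = int(np.argmax(dists))
    if dists[worst] <= ERROR_THRESHOLD or len(points) <= 2:
        return [points]                  # this segment fits well enough
    # Split at the pivot (worst-fitting) point, clamped so that both
    # halves are strictly smaller and the recursion terminates.
    pivot = max(1, min(worst, len(points) - 2))
    return split(points[:pivot + 1]) + split(points[pivot:])
```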
B. Hough Transform
The basic idea of the Hough Transform is that each point in
the map votes for the lines that pass through it, and the
line with the most votes becomes the best-fitting line.
According to Norouzi's research, the Hough Transform can be
implemented by recursively counting the votes of all the
points in the map [7]. A 2-D array is generated, each element
of which represents the rho and theta of a line in polar
coordinates. Counting how many raw data points lie near a
given line is done by calculating the distance between each
point and that line. After the votes for all lines are
tallied, the best-fitting line can be generated. A
second-best-fitting line can then be generated by repeating
the same procedure with all the points except those that have
already voted for the best-fitting line.
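
A bare-bones version of this voting scheme, in the standard
accumulator formulation, is sketched below; the grid resolutions
are assumed values. Extracting a second line would repeat the call
on the points not assigned to the first.

```python
# Bare-bones Hough Transform voting, in the standard accumulator
# formulation. Grid resolutions (1-degree theta bins, 5 cm rho bins)
# are assumed values chosen for illustration.
import math
import numpy as np

def hough_best_line(points, n_theta=180, rho_step=0.05):
    """points: (x, y) pairs. Returns (rho, theta) of the top-voted line."""
    thetas = np.linspace(0.0, math.pi, n_theta, endpoint=False)
    rho_max = max(math.hypot(x, y) for x, y in points)
    n_rho = int(math.ceil(2 * rho_max / rho_step)) + 1
    votes = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        for j, theta in enumerate(thetas):
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta).
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[int(round((rho + rho_max) / rho_step)), j] += 1
    i, j = np.unravel_index(votes.argmax(), votes.shape)
    return i * rho_step - rho_max, thetas[j]
```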

After line extraction, the ray casting algorithm was
implemented to find the best available path. Having received
commands from the joystick, the computer uses ray casting to
check whether the current path is safe. If the planned path is
safe and free of obstacles, the wheelchair follows the user's
joystick commands. However, if a collision is imminent, the
ray casting algorithm takes charge and generates a safer
alternative route.
According to Sauze, ray casting is implemented by emitting
many rays from the wheelchair into the 40-degree area in front
of it [3]. By calculating the distances between the wheelchair
and obstacles and applying noise-eliminating algorithms, the
most appropriate path line can be generated from the rays. The
wheelchair does not change its direction of travel until a
threshold value is reached; at that point, it switches to the
new pathway immediately. After passing the obstacle, the
wheelchair gradually switches back to its original direction,
as shown in Figure 9.


Figure 9 Ray casting by Sauze [3]
Our implementation of ray casting on the wheelchair is
slightly different from Sauze's algorithm. The wheelchair and
its position within the next second are represented by a
rectangle instead of a ray. At any point in time, no obstacle
lines extracted from the raw LiDAR data may fall within the
rectangle. This exclusion ensures that the current and next
positions of the wheelchair are safe from possible collisions.
The strategy is to keep rotating the rectangle until a
possible path is generated or a dead end is detected, as
sketched below. These results are discussed in the following
section.
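
A hedged sketch of this rectangle-rotation check follows; the
footprint dimensions, the 5-degree rotation step, and the
alternating left/right search order are illustrative assumptions,
and the simplified edge-crossing test ignores obstacle segments
lying entirely inside the rectangle.

```python
# Sketch of the rectangle-rotation strategy described above. The
# footprint size, rotation step, and search order are assumptions;
# segments lying entirely inside the rectangle are not caught by
# this simplified edge-crossing test.
import math

CHAIR_W, CHAIR_L = 0.7, 1.5     # swept rectangle footprint (m), assumed
STEP = math.radians(5)          # rotation increment, assumed

def _cross(o, p, q):
    return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])

def segments_intersect(a, b):
    """True if segments a and b properly cross each other."""
    (p1, p2), (p3, p4) = a, b
    return ((_cross(p3, p4, p1) > 0) != (_cross(p3, p4, p2) > 0) and
            (_cross(p1, p2, p3) > 0) != (_cross(p1, p2, p4) > 0))

def rect_edges(heading):
    """Edges of the wheelchair's swept rectangle rotated to `heading`."""
    c, s = math.cos(heading), math.sin(heading)
    half = CHAIR_W / 2.0
    local = [(-half, 0.0), (half, 0.0), (half, CHAIR_L), (-half, CHAIR_L)]
    corners = [(x*c - y*s, x*s + y*c) for x, y in local]
    return list(zip(corners, corners[1:] + corners[:1]))

def find_heading(obstacle_segments):
    """Rotate alternately right and left until the rectangle is clear."""
    for k in range(int(2 * math.pi / STEP)):
        heading = STEP * ((k + 1) // 2) * (1 if k % 2 else -1)
        edges = rect_edges(heading)
        if not any(segments_intersect(e, o)
                   for e in edges for o in obstacle_segments):
            return heading          # a safe path exists at this heading
    return None                     # dead end: no rotation avoids obstacles
```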
IV. RESULTS
The LiDAR was used to scan a hallway in the Reber Building at
the Pennsylvania State University. The raw data taken from the
LiDAR are plotted in Figure 10. The wheelchair is at the
origin, there is a wall on each side, and there is an obstacle
in front of the wheelchair. After the Split and Merge line
extraction algorithm is applied, 77 line segments are retrieved
from 512 raw data points; by changing the merging threshold,
fewer or more line segments can be extracted. The resulting
lines extracted from the raw data are shown in different colors
in Figure 11.

Figure 10 Raw data from LiDAR

Figure 11 Lines extracted from data of Figure 10 using the Split and Merge
algorithm
The result of applying the Hough Transform to the raw data
of Figure 12 is shown in Figure 13, which presents the lines, in
different colors, extracted from Figure 12.

Figure 12 Raw data from LiDAR


Figure 13 Lines extracted from data of Figure 12 using the Hough Transform
algorithm
Comparing the results of the Split and Merge and Hough
Transform algorithms, it was observed that Split and Merge is
the better choice for analyzing the real-time LiDAR data
because of its superiority in speed and accuracy.
The ray casting algorithm was then tested on the lines
extracted from raw LiDAR data by the Split and Merge
algorithm. In Figures 14 and 15, the rotation procedure fails
to find an available path, meaning the wheelchair cannot avoid
the obstacles and needs to back up. In Figures 16 and 17,
however, an available path is generated from pseudo LiDAR
data.


Figure 14 Original position of wheelchair in real data

Figure 15 Wheelchair position after rotation - failed to generate an adequate
path

Figure 16 Original position of wheelchair in pseudo data

Figure 17 Wheelchair position after rotation - successfully generated an
adequate path
The Split and Merge line extraction algorithm and the ray
casting obstacle avoidance algorithm were then tested in ROS
with real data in a lab in the Reber Building at the
Pennsylvania State University. The results, displayed in the
rviz program in ROS, are shown in Figures 18 through 21. As
these figures show, the wheelchair can find an appropriate
path or detect a dead end in different situations.


Figure 18 Algorithm testing on rviz: it is safe for the wheelchair to move forward

Figure 19 Algorithm testing on rviz: it is not safe for the wheelchair to move forward and it needs to turn right

Figure 20 Algorithm testing on rviz: it is not safe for the wheelchair to move forward and it needs to turn left

Figure 21 Algorithm testing on rviz: it is not safe for the wheelchair to move forward and it needs to go back
V. CONCLUSION
Currently, the wheelchair is able to avoid some obstacles
while controlled by the joystick. The Split and Merge line
extraction algorithm and the ray casting obstacle avoidance
algorithm interpret the raw data from the LiDAR to find a safe
pathway or detect a dead end. The algorithms also calculate
the angle the wheelchair needs to turn to avoid the obstacles,
so the angle is ready to be sent to the motor controller to
switch the wheelchair onto the new path.
The main issue in algorithm development is speed. If the
algorithms are slow while the LiDAR keeps feeding new scan
data into the computer, the data will queue up. The extracted
lines and newly planned paths then lag behind the incoming
data, and the currently planned path may be the path that was
appropriate five seconds ago, which causes safety issues. At
present, Split and Merge and ray casting are still slower than
the rate at which new LiDAR data arrive, so the compromise for
now is to ignore new data while the algorithms are running,
ensuring that each run works on the most recent available
scan, as sketched below. In the future, this problem must be
fixed.
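
One common way to realize this stopgap in ROS is a subscriber
queue of length one, which keeps only the newest scan while the
planner runs; in the sketch below, the topic name and the
placeholder functions standing in for the Section III algorithms
are assumptions.

```python
# Sketch of the stopgap: plan on the most recent scan only. With
# queue_size=1 the subscriber discards scans that arrive while the
# slow callback runs. Topic name and helpers are assumptions.
import rospy
from sensor_msgs.msg import LaserScan

def split_and_merge(scan):
    return []        # placeholder for the line extraction in Section III

def ray_cast(lines):
    return None      # placeholder for the ray-casting planner in Section III

def scan_callback(scan):
    lines = split_and_merge(scan)   # slow line extraction
    path = ray_cast(lines)          # slow path planning
    # Scans published while this callback ran were dropped, so the
    # next invocation always starts from up-to-date data.

if __name__ == '__main__':
    rospy.init_node('planner')
    rospy.Subscriber('/scan', LaserScan, scan_callback, queue_size=1)
    rospy.spin()
```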
The ultimate goal of the project is to include a brain-machine
interface with which users can interact and choose their final
destinations; the wheelchair will then navigate to the
destination autonomously. Electroencephalography (EEG) is the
best choice for ALS and CP patients as a medium for
transmitting neurological data. Patients need only mentally
focus on a final destination, represented by a flashing icon,
and the command information travels from the brain to the
brain-computer interface (BCI) as an EEG signal. The BCI then
translates the information and sends control signals to the
computer. Other human-computer interface methods used by
autonomous wheelchairs include touch screens and eye-tracking
systems, but since our target users are ALS and CP patients
with minimal mobility, EEG currently appears to be the best
choice.


VI. REFERENCES

[1] "Wikipedia," Wikimedia Foundation, 12 7 2014. [Online].
Available: http://en.wikipedia.org/wiki/Lidar. [Accessed
15 7 2014].
[2] A. V. Nguyen, L. B. Nguyen, S. Su and H. T. Nguyen,
"Shared Control Strategies for Human Machine Interface
in an Intelligent Wheelchair," in 35th Annual International
Conference of the IEEE EMBS, Osaka, Japan, 2013.
[3] C. Sauze and M. Neal, "A Raycast Approach to Collision
Avoidance in Sailing Robots," in the 3rd International
Robotic Sailing Conference, Kingston, Ontario, Canada,
2010.
[4] V. Nguyen, S. Gächter, A. Martinelli, N. Tomatis and R.
Siegwart, "A comparison of line extraction algorithms
using 2D range data for indoor mobile robotics," Autonomous
Robots, vol. 23, no. 2, pp. 97-111, 2007.
[5] H. Wang, G. G. Grindle, J. Candiotti, C. Chung, M. Shino,
E. Houston and R. A. Cooper, "The Personal Mobility and
Manipulation Appliance (PerMMA): a Robotic
Wheelchair with Advanced Mobility and Manipulation,"
in 34th Annual International Conference of the IEEE
EMBS, San Diego, California USA, 2012.
[6] L. Montesano, J. Minguez, M. Díaz and S. Bhaskar,
"Towards an Intelligent Wheelchair System for Cerebral
Palsy Users," IEEE Transactions on Neural Systems and
Rehabilitation Engineering, vol. 18, no. 2, pp. 193-202,
2010.
[7] M. Norouzi, M. Yaghobi, M. R. Siboni and M. Jadaliha,
"Recursive Line Extraction Algorithm from 2D Laser
Scanner Applied toNavigation a Mobile Robot," in IEEE
International Conference on Robotics and Biomimetics,
Bangkok, 2009.
[8] S. Ladha, D. K. Kumar, P. Bhalla and A. Jain, "Use of
LIDAR for Obstacle Avoidance by an Autonomous Aerial
Vehicle," in International Aerial Robotics Competition
Symposium, Birla Institute of Technology and Science
Pilani, Dubai Campus, UAE, 2011.
