
Int. J. Fuzzy Syst. (2015) 17(2):193–205
DOI 10.1007/s40815-015-0019-2

Implementation of an Object-Grasping Robot Arm Using Stereo Vision Measurement and Fuzzy Control

Jun-Wei Chang¹ · Rong-Jyue Wang² · Wen-June Wang¹ · Cheng-Hao Huang³

Received: 23 October 2014 / Revised: 16 February 2015 / Accepted: 6 March 2015 / Published online: 22 March 2015
© Taiwan Fuzzy Systems Association and Springer-Verlag Berlin Heidelberg 2015

Abstract  In this paper, a method using a stereo vision device and fuzzy control to guide a robot arm to grasp a target object is proposed. The robot arm has five degrees of freedom, including a gripper and four joints. The stereo vision device located beside the arm captures images of the target and the gripper. Image processing techniques such as color space transformation, morphologic operation, and 3-D position measurement are used to identify the target object and the gripper in the captured images and to estimate their relative positions. Based on the estimated positions of the gripper and the target, the gripper can approach and grasp the target using inverse kinematics. However, since the robot arm's accuracy of movement may be affected by gearbox backlash or hardware uncertainty, the gripper might not approach the desired position with precision using only inverse kinematics. Therefore, a fuzzy compensation method is added to correct any position errors between the gripper and the target such that the gripper can grasp the target. Using the proposed method, the stereo vision device can not only locate the target object but also trace the position of the robot arm until the target object is grasped. Finally, some experiments are conducted to demonstrate successful implementation of the proposed method on the robot arm control.

Keywords  Robot arm · Fuzzy control · Inverse kinematics · Image processing

Corresponding author: Rong-Jyue Wang ([email protected])
Jun-Wei Chang ([email protected]) · Wen-June Wang ([email protected]) · Cheng-Hao Huang ([email protected])

¹ Department of Electrical Engineering, National Central University, No. 300, Jhongda Rd., Jhongli Dist., Taoyuan City 32001, Taiwan (ROC)
² Department of Electronic Engineering, National Formosa University, No. 64, Wunhua Rd., Huwei Township, Yunlin County 632, Taiwan (ROC)
³ Robotic Automation Dept., Delta Electronics, Inc., 5F., No. 46, Keya Rd., Daya Dist., Taichung City 428, Taiwan (ROC)

1 Introduction

To control a robot arm, it is necessary that the positions of the controlled robot arm be known at all times. External sensors such as accelerometers and resolvers are needed in order to estimate the positions of the robot arm and its gripper. These sensors may be installed on every actuator to measure the angles of the degrees of freedom (DOFs) as feedback signals. From the feedback signals of the robot arm, the pose of the robot arm and the position of the gripper can be estimated. However, if the robot arm has many DOFs, the number of sensors must be equivalent to or even greater than the number of DOFs. In other words, if the robot arm has many DOFs, the cost of installing those external sensors increases accordingly.

On the other hand, many studies regarding robot arm platforms use cameras to identify their targets and to monitor their workspaces. The paper [1] used a depth image sensor, the Kinect, to identify target objects and control a humanoid robot arm to grasp them. The paper [2] identified an object using a SIFT algorithm and a monocular vision device mounted on the gripper. The paper [3] identified elevator buttons and made a robot arm


to operate an elevator. Supposing that the monitoring camera can recognize the position of the gripper and track it, we do not need to install sensors on the robot arm, thus reducing the total cost of the robot arm platform. Considering this cost reduction benefit, this study proposes a method which uses a stereo vision device to identify and locate the gripper and the target object, and to estimate the pose of the robot arm.

There have been many studies on the subject of robot arm control using many different methods. The paper [4] used interactive teaching to approximate a space of knowledge-based grasps. The paper [5] presented an off-line trajectory generation algorithm to solve the contouring problem of industrial robot arms. Using the Mitsubishi PA-10 robot arm platform, the paper [6] proposed a harmonic drive transmission model to investigate the influence of gravity and material on the robot arm; the robot arm can then be controlled to track a desired trajectory and the motion error can be further analyzed. The paper [7] applied a self-configuration fuzzy system to find the inverse kinematics solutions for a robot arm. The paper [8] employed an inverse-kinematics-based two-nested control-loop scheme to control the tip position using joint position and tip acceleration feedback. The paper [9] proposed an analytical methodology of inverse kinematics computation for a seven-DOF redundant robot arm with joint limits. Using the inverse kinematics technique, the robot arm in [10] was designed to push the buttons of an elevator. On the other hand, studies relating to position measurement and the use of vision therein are described as follows. The paper [11] combined a 2-D vision camera and an ultrasonic range sensor to estimate the position of the target object for the robot gripper. The paper [12] used two cameras and one laser to identify the elevator door and to determine its depth distance. The papers [13–15] proposed photogrammetric methods to measure the distances of objects using their features. The paper [16] effectively utilized color images to achieve 3-D measurement using an RGB color histogram. The paper [17] proposed an image-based 3-D measuring system to measure distance and area using a CCD camera and two laser projectors. The paper [18] adopted two cameras and a laser projector to measure the edge of an object regardless of its position.

In this paper, we propose an object-grasping method using a stereo vision device and fuzzy control so that a robot arm can accomplish an object-grasping task. The robot arm has no sensors installed on it; instead, the stereo vision device is set up beside the robot arm. Here, the stereo vision device plays the important role of perceiving the position of the robot arm and providing the feedback signals. Firstly, the stereo vision device is applied to identify the position of the target object. Subsequently, the stereo vision device traces the position of the robot arm and estimates the angle of each joint on the robot arm using the inverse kinematics method. However, because the design and assembly of the robot arm are not that precise, there is visible backlash in the mechanism. This means that position errors caused by backlash or hardware uncertainty should be considered when the robot arm moves. Herein, a fuzzy compensation method is used to deal with the position errors. The concept of the fuzzy compensation method is to adjust the amount of robot arm movement using fuzzy logic: when the robot arm is close to the target, the compensation value is low; on the contrary, when the robot arm is far from the target, the compensation value is high. Fusing the position value of the robot arm and the compensation values, we can obtain the new angle of each joint of the robot arm by inverse kinematics and drive the motors to the calculated angles. Hence, using the stereo vision to estimate the configuration of the robot arm and the fuzzy compensation method to reduce the position errors, the robot arm can accurately grasp its target object.

This paper is organized as follows. Section 2 introduces the experimental platform of this study. The principal stereo vision techniques, the robot arm inverse kinematics analysis, and the fuzzy compensation are explained in Sects. 3, 4, and 5, respectively. The results of the practical experiments and a discussion are given in Sect. 6. Finally, the conclusion is given in Sect. 7.

2 Description of Experimental Platform

In this paper, in order to implement an object-grasping task, a platform for a robot arm with a stereo vision device is designed which contains a PC (laptop computer), a robot arm, a stereo vision device, and batteries, as shown in Fig. 1. The utilized devices and their purposes are described below.

2.1 The Robot Arm

The robot arm consists of the main body (four DOFs) and a gripper (one DOF) as shown in Fig. 2. Its main components are four SmartMotors, a planetary gearbox, three harmonic drives, two AX-12 motors, and some metal components. The specifications of the SmartMotors and the AX-12 are shown in Tables 1 and 2, respectively. The SmartMotors and the AX-12 motors communicate with the PC through serial ports (RS-232). The motor with the largest torque, the SM3416D_PLUS, together with a planetary gearbox, is installed at axis 1 to provide larger output torque, thus enabling the robot arm to lift not only itself but also the target object. Based on many experiments, the robot arm can lift objects weighing up to about 2 kg. Since the load on axes 2–4 is much smaller than that on axis 1, each remaining axis is


composed of an SM2315D motor and a harmonic drive. The gripper is composed of two AX-12 motors, ML and MR, as shown in Fig. 3. The PC receives the digitized feedback values of the AX-12 motors, which include position, speed, and torque, through the serial ports (RS-232). Therefore, we can use those status values to ascertain whether the gripper has successfully grasped an object or not. For instance, when the robot arm recognizes the target object and the gripper starts to close, the ML motor's torque value is positive and the MR motor's torque value is negative. If the values are inverted, it means the target object has been grasped. To prevent objects from falling out of the gripper, two pieces of non-slip material are pasted inside the gripper. A green mark is placed on each of the two sides of the gripper in order to ease the identification of the gripper by the stereo vision device.
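The torque-sign test just described is simple enough to state in code. Below is a minimal sketch (Python is used for all code sketches in this article; the authors' implementation in Borland C++ Builder is not shown, and the signed torque readings are assumed to have already been parsed from the AX-12 status packets):

    def object_grasped(ml_torque: float, mr_torque: float) -> bool:
        """Detect a grasp from the signed torque feedback of the two
        AX-12 gripper motors.

        While the gripper closes freely, ML reports positive torque and
        MR reports negative torque; once the pincers press against an
        object, the signs invert.
        """
        return ml_torque < 0.0 and mr_torque > 0.0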
Fig. 1 The robot arm experimental platform
Fig. 2 Structure of the robot arm
Fig. 3 Structure of the gripper

Table 1 Specifications of the SmartMotors

                         SM3416D_PLUS   SM2315D
  Input voltage (VDC)    20–48          20–48
  Maximum torque (N m)   1.6            0.3
  Torque (N m)           1.09           0.19
  Speed (RPM)            3100           9000
  Communication          RS-232         RS-232
  Weight (kg)            2.27           0.45

Table 2 Specification of the AX-12 motor

  Input voltage (V)      7       10
  Max torque (kgf cm)    12      16.5
  Speed (sec/60°)        0.269   0.196
  Weight (g)             55
  Gear reduction ratio   1/254

2.2 The Stereo Vision Device

The stereo vision device consists of two Logitech QuickCam Pro webcams, as shown in Fig. 4. It is used to capture stereo images with a resolution of 320 × 240 pixels at a capture rate of 30 frames per second. The captured images are transmitted to the laptop computer, where they are used to identify, and to calculate the 3-D positions of, the target object and the gripper.


Fig. 4 Stereo vision device
Fig. 5 Images a and b captured by the webcams WL and WR, respectively

2.3 The Laptop Computer and Software

In this study, a PC (laptop computer) is used as the control center. The CPU is an Intel Core 2 Duo P8600 which runs at 2.4 GHz with 2 GB of RAM. Borland C++ Builder is chosen as the development software and is applied to implement the robot arm control and stereo vision recognition algorithms. After the stereo vision device captures the images, the software processes and binarizes the images to identify the target object. Subsequently, the center point of the target object is designated as the gripper's desired position. Then, the green marks on the gripper are identified from the captured stereo vision images and a motion plan is calculated to ensure that the robot arm can successfully move the gripper to the goal position. Finally, the gripper grasps the target object.

2.4 The Batteries

The batteries consist of a Li-ion battery pack and two lead–acid batteries. The lead–acid batteries provide the 24 V power for the SmartMotors by cascading two lead–acid batteries (a single lead–acid battery can provide 12 V of output power). The output voltage of the Li-ion battery pack is 7.4 V, and it provides the power for the AX-12 motors.

3 Stereo Vision Method for Object Position Measurement

The two webcams, WR and WL, respectively capture several pairs of images (see Fig. 5) as the input images of the stereo vision. Based on those images, two tasks should be achieved: one is the target object identification and the other is the object position measurement. The target object (or gripper) identification method consists of image-processing techniques such as color space transformation, binarization, morphologic operations, and connected components, and is used to find the target object in the captured images. Next, the image geometry is used to calculate the position of the target object (or the gripper) in 3-D space. The identification and positioning of the target object (or the gripper) are described as follows.

3.1 Target Object Identification

The target object identification is based on the color of the object. In other words, a region is distinguished by whether its color is the same as the object's color; if it is, the region can be identified as the target object in the image. It is noted that the captured images use the RGB color model, but the particular color region is not easily distinguished in this color space. Hence, a color model transformation (1)–(3) is needed, in which the RGB color model is transformed into the HSV (hue, saturation, and value) color model [19].

\[ V = \max(R, G, B), \tag{1} \]

\[ S = \begin{cases} \dfrac{[V - \min(R, G, B)] \times 255}{V}, & \text{if } V \neq 0, \\[4pt] 0, & \text{else,} \end{cases} \tag{2} \]

\[ H = \begin{cases} \dfrac{(G - B) \times 60}{S}, & \text{if } V = R, \\[4pt] \dfrac{(B - R) \times 60}{S} + 180, & \text{if } V = G, \\[4pt] \dfrac{(R - G) \times 60}{S} + 240, & \text{if } V = B. \end{cases} \tag{3} \]
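As a concrete illustration, Eqs. (1)–(3) can be transcribed directly for one 8-bit RGB pixel. The sketch below follows the printed equations literally; the guard for S = 0 and the wrap-around of negative hues are our additions, and library routines such as OpenCV's cvtColor perform an equivalent conversion in bulk.

    def rgb_to_hsv(r: int, g: int, b: int):
        """Convert one 8-bit RGB pixel to HSV following Eqs. (1)-(3)."""
        v = max(r, g, b)                                    # Eq. (1)
        s = 0 if v == 0 else (v - min(r, g, b)) * 255 // v  # Eq. (2)
        if s == 0:
            h = 0                      # hue is undefined for gray pixels
        elif v == r:                   # Eq. (3)
            h = (g - b) * 60 // s
        elif v == g:
            h = (b - r) * 60 // s + 180
        else:                          # v == b
            h = (r - g) * 60 // s + 240
        return h % 360, s, v           # wrap negative hues into [0, 360)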
According to the color of the target object, the two images in the HSV color space are binarized into two binary images by setting suitable interval values of HSV. Then, morphologic operators [20] such as erosion and dilation, together with connected components, are used to filter the noise, as shown in Fig. 6. To mark the target object, the connected-component analysis is applied to find the center of gravity of the target object and its coordinate values in those images, namely (xL, yL) and (xR, yR). The identification result of a target object is shown in Fig. 7, in which the red points indicate the center of gravity of the target object. Those two coordinate values for the center of gravity of the target object will be used to measure the target object's position in 3-D space.

Fig. 6 The binarized and filtered images of Fig. 5
Fig. 7 The center of gravity of the target object from the images of the webcams WL and WR
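The whole identification pipeline (HSV thresholding, morphologic filtering, and a connected-component pass that yields the centroid) might look as follows with OpenCV. This is a sketch rather than the authors' code, and the HSV interval shown is a placeholder, since the paper does not report its threshold values.

    import cv2
    import numpy as np

    def find_target_centroid(bgr_image, hsv_lo, hsv_hi):
        """Binarize by an HSV interval, clean the mask with morphology,
        and return the centroid (x, y) of the largest component."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, hsv_lo, hsv_hi)        # binarization
        kernel = np.ones((3, 3), np.uint8)
        mask = cv2.erode(mask, kernel, iterations=2)   # remove speckle noise
        mask = cv2.dilate(mask, kernel, iterations=2)  # restore blob size
        n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
        if n < 2:                                      # label 0 is background
            return None
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        return tuple(centroids[largest])

    # Example for a blue target (interval values are illustrative only):
    # c_left = find_target_centroid(img_left,
    #                               np.array([100, 120, 60]),
    #                               np.array([130, 255, 255]))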


3.2 Target Object Position Measurement

In order to measure the target object's position in 3-D space, we first need to define the coordinates of the robot arm upon which the object-grasping system is constructed. We define the coordinates of this system as shown in Fig. 8, in which CR and CL are the positions in 3-D space of the two webcams WR and WL, respectively, and Ow is the origin point of this system, which is the center point between the two webcams. Ot is the center point of the object plane. OL and OR are the center pixels of the WL image and the WR image, respectively. L indicates the distance between the two webcams. The hardware parameters fL and fR indicate the two webcams' focal lengths. The point T(xt, yt, zt) is the position of the target object in 3-D coordinates. Once this system is defined, we can use the image geometry [21] to measure the position of the target object from the WL and WR images, as shown in Eqs. (4)–(6). The obtained position T(xt, yt, zt) can be considered the goal point of the robot arm.

Fig. 8 The coordinates of this system

\[ z_t = \frac{L}{|x_L|/f_L + |x_R|/f_R}, \tag{4} \]

\[ x_t = \frac{z_t}{2}\left(\frac{x_L}{f_L} + \frac{x_R}{f_R}\right), \tag{5} \]

\[ y_t = \frac{y_L \, z_t}{f_L}. \tag{6} \]
Then, the vision origin Ow is assigned as the world origin O, which means Ow = O, where the world coordinates are defined as Cartesian coordinates. However, due to the fact that the vision device is capable of rotation, there exists a three-phase difference in angle between the vision and the world coordinates, which can be divided into a pitch angle ψx, a yaw angle ψy, and a roll angle ψz, as shown in Fig. 9. Hence, the three-phase rotation matrices are used to transform the stereo vision coordinates into the world coordinates as in Eq. (7) [22].

\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \cos\psi_x & \sin\psi_x & 0 \\ -\sin\psi_x & \cos\psi_x & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\psi_y & 0 & \sin\psi_y \\ 0 & 1 & 0 \\ -\sin\psi_y & 0 & \cos\psi_y \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi_z & \sin\psi_z \\ 0 & -\sin\psi_z & \cos\psi_z \end{bmatrix} \begin{bmatrix} x_t \\ y_t \\ z_t \end{bmatrix}. \tag{7} \]
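In code, Eq. (7) is just a product of three 3 × 3 rotation matrices. A sketch with NumPy, angles in radians:

    import numpy as np

    def vision_to_world(p_vision, psi_x, psi_y, psi_z):
        """Rotate a point from the stereo-vision frame into the world
        frame following Eq. (7); psi_x, psi_y, psi_z are the pitch, yaw,
        and roll angles in radians."""
        cx, sx = np.cos(psi_x), np.sin(psi_x)
        cy, sy = np.cos(psi_y), np.sin(psi_y)
        cz, sz = np.cos(psi_z), np.sin(psi_z)
        r_pitch = np.array([[ cx,  sx, 0.0],
                            [-sx,  cx, 0.0],
                            [0.0, 0.0, 1.0]])
        r_yaw   = np.array([[ cy, 0.0,  sy],
                            [0.0, 1.0, 0.0],
                            [-sy, 0.0,  cy]])
        r_roll  = np.array([[1.0, 0.0, 0.0],
                            [0.0,  cz,  sz],
                            [0.0, -sz,  cz]])
        return r_pitch @ r_yaw @ r_roll @ np.asarray(p_vision)

    # With the angles reported in Sect. 6, (27°, 0°, 60°):
    # p_world = vision_to_world((xt, yt, zt),
    #                           np.radians(27), 0.0, np.radians(60))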

After the transformation, the position (xt, yt, zt) in the stereo vision coordinates can be transformed into the position (x, y, z) in the world coordinates. In addition, since there are no sensors such as resolvers or encoders on the robot arm with which to measure the rotation angles of all axes and estimate the position of the gripper, a similar method of object position measurement is applied to find the position of the gripper.


Fig. 9 The different rotary directions between the stereo vision and world coordinates

4 Robot Arm Inverse Kinematics Analysis

To make the robot arm and its gripper move to a desired position, we must first perform the inverse kinematics analysis. Figure 10 shows the linking structure of the robot arm, in which the shoulder joint Or is the origin point of the inverse kinematics coordinates, Q is the elbow joint, and Gr(xr, yr, zr) is the position of the gripper. OrQ and QGr indicate links 1 and 2 of the robot arm, whose lengths are d1 and d2, respectively. In addition, there are three rotation angles, θ1, θ2, and θ3, at the Or and Q points. For further geometric derivations, links 1 and 2 are projected onto the Yr–Zr plane (see the green line in Fig. 10), where R and S denote the projective points of Gr and Q, respectively. The axis W is a reference axis which extends from the projected link 1 on the Yr–Zr plane.

Fig. 10 Geometry of the robot arm
Fig. 11 Further geometric analysis for the robot arm. a Links 1 and 2 of the robot arm, b Xr–W plane, and c Yr–Zr plane

Let us provide three figures to introduce the derivations of the kinematics of the robot arm. Figure 11a shows the two-link arm plane, where L = √(xr² + yr² + zr²) is the distance between the gripper Gr and Or. The elbow joint angle θ3 is obtained as follows:

\[ \theta_3 = \pi - \alpha, \tag{8} \]

where

\[ \alpha = \cos^{-1}\left(\frac{d_1^2 + d_2^2 - L^2}{2 d_1 d_2}\right). \tag{9} \]

On the Xr–W plane, as shown in Fig. 11b, the lifting movement joint angle θ2 at the Or point can be derived as

\[ \theta_2 = \sin^{-1}\left(\frac{x_r}{d_1 + d_2 \cos\theta_3}\right). \tag{10} \]

Furthermore, Fig. 11c depicts the projected arm on the Yr–Zr plane, where L′ = √(yr² + zr²). Consequently, the shoulder rotation joint angle θ1 is obtained:

\[ \theta_1 = \beta + \gamma, \tag{11} \]

where

\[ \beta = \cos^{-1}\left(\frac{L'^2 + \big((d_1 + d_2\cos\theta_3)\cos\theta_2\big)^2 - (d_2\sin\theta_2)^2}{2 L' (d_1 + d_2\cos\theta_3)\cos\theta_2}\right) \tag{12} \]

and


\[ \gamma = \sin^{-1}\left(\frac{y_r}{L'}\right). \tag{13} \]

The gripper's grasping angle is controlled by a rotation angle θ4, and θ4 must be adjusted so that the two pincers are perpendicular to the Xr–Yr plane, as shown in Fig. 12. θ4 can be obtained from θ2: since the position of the gripper Gr is the terminal point of the robot arm's motion, when axis 2 of the robot arm rotates to the angle θ2, the gripper should rotate to the inverse angle to cancel out the dependent rotation. Therefore, we get θ4 as shown in Eq. (14). It is noted that the dual-pincer gripper will fail to grasp any object if the two pincers are not perpendicular to the X–Y plane.

\[ \theta_4 = -\theta_2. \tag{14} \]

Fig. 12 Rotation angle θ4 of the gripper

Since the stereo vision and the robot arm do not share the same coordinates, to successfully grasp a target object, the coordinates of the stereo vision are transformed into the coordinates of the robot arm as follows:

\[ G_{gr}(x, y, z) = (O_r - O) + (x_r, y_r, z_r), \tag{15} \]

where Ggr is the position of the gripper in the world coordinates and (Or − O) is the distance from the world origin to the robot arm origin.
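Collecting Eqs. (8)–(14), the joint angles follow in closed form from a desired gripper position. A sketch (angles in radians; the reachability and joint-limit checks a real implementation needs are omitted):

    import math

    def inverse_kinematics(xr, yr, zr, d1, d2):
        """Closed-form joint angles for the 4-DOF arm, Eqs. (8)-(14).

        (xr, yr, zr) : desired gripper position in the arm frame
        d1, d2       : lengths of links 1 and 2
        """
        L = math.sqrt(xr**2 + yr**2 + zr**2)    # shoulder-to-gripper distance
        alpha = math.acos((d1**2 + d2**2 - L**2) / (2 * d1 * d2))  # Eq. (9)
        theta3 = math.pi - alpha                # Eq. (8), elbow joint
        theta2 = math.asin(xr / (d1 + d2 * math.cos(theta3)))      # Eq. (10)
        Lp = math.sqrt(yr**2 + zr**2)           # projection onto Yr-Zr plane
        reach = (d1 + d2 * math.cos(theta3)) * math.cos(theta2)
        beta = math.acos((Lp**2 + reach**2 - (d2 * math.sin(theta2))**2)
                         / (2 * Lp * reach))    # Eq. (12)
        gamma = math.asin(yr / Lp)              # Eq. (13)
        theta1 = beta + gamma                   # Eq. (11), shoulder rotation
        theta4 = -theta2                        # Eq. (14), keeps pincers vertical
        return theta1, theta2, theta3, theta4

    # With the link lengths of Sect. 6 (d1 = 17 cm, d2 = 26 cm):
    # th1, th2, th3, th4 = inverse_kinematics(12.0, -9.0, 40.0, 17.0, 26.0)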
5 Fuzzy Control for Position Error Compensation

Practically, motor backlash problems and hardware uncertainties may affect the 3-D position accuracy of the inverse kinematics technique; i.e., the gripper cannot accurately approach a desired position using only the inverse kinematics method, as shown in Fig. 13. In Fig. 13, the Gact point indicates the actual position of the gripper, whose movement has been guided by the stereo vision device and the inverse kinematics method, and the Ggr point indicates the desired position of the gripper. There is a position error between Ggr and Gact. Hence, this section presents a fuzzy control technique to compensate for position errors when using the stereo vision device. When the gripper is moved to a predicted position, the stereo vision device measures the gripper's actual position as feedback and then calculates the position error. Subsequently, the fuzzy controller is used to reduce the position error. The fuzzy controller is described as follows.

Fig. 13 Position error of the gripper

To compensate for errors in the position, the deviation vector between Gact and Ggr is calculated as shown in Eq. (16), where D is the deviation and ex, ey, and ez indicate the deviations on the x-, y-, and z-axes, respectively.

\[ D = G_{act} - G_{gr} = (e_x, e_y, e_z). \tag{16} \]

Then, let us describe the design of the fuzzy control for error compensation using the deviation on the x-axis first. To compensate for the x-axis error, the fuzzy control rule table is designed as Table 3, in which ex is the deviation value on the x-axis and denotes the premise part; rx is the compensation factor and denotes the consequent part. Both ex and rx are decomposed into five fuzzy sets: negative big (NB), negative small (NS), zero (ZO), positive small (PS), and positive big (PB). Figure 14 shows the membership functions of the premise part and the consequent part.

Table 3 Fuzzy rule table of error compensation on the x-axis

  ex   NB   NS   ZO   PS   PB
  rx   NB   NS   ZO   PS   PB

In Fig. 14, a_x^i (i = 1, 2, …, 5) and u_x^i (i = 1, 2, …, 5) are the parameters of the two membership functions. Considering the structure of the robot arm in this study, the parameters of the premise parts and consequent parts are assigned as in Tables 4 and 5, respectively, to deal with the object-grasping task. Due to the structural defects of the real robot arm, its backlash is about 5°, which causes the position error of the gripper to be around ±5.5 cm. Furthermore, it is hard to find exact mathematical equations to express the model of the robot arm. Therefore, the parameters of the premise part are set according to the experience of many experiments. Since the output of the fuzzy control is the compensation factors, the consequent parts should be very small. Therefore, after many experiments to adjust the parameters of the consequent fuzzy sets, we have Table 5.


Fig. 14 Membership functions of the a premise part and b consequent part

Table 4 Parameters of the premise parts

  i      1      2      3    4     5
  a_x^i  -5     -3     0    3     5
  a_y^i  -5.5   -1.5   0    1.5   5.5
  a_z^i  -5     -2     0    2     5

Table 5 Parameters of the consequent parts

  i      1      2      3    4     5
  u_x^i  -0.6   -0.4   0    0.4   0.6
  u_y^i  -0.65  -0.5   0    0.5   0.65
  u_z^i  -0.65  -0.5   0    0.5   0.65
After the parameters are assigned, the center average defuzzification method is used to obtain the position compensation factor rx as follows:

\[ r_x = \frac{\sum_{i=1}^{5} u_x^i \cdot B^i(e_x)}{\sum_{i=1}^{5} B^i(e_x)}, \tag{17} \]

where u_x^i is the center value of the consequent part and B^i(e_x) is the membership degree of the premise part. For the same reason, the compensation factors ry and rz can be found in a similar manner. Finally, the compensation factors ri (i = x, y, z) are used to adjust the error as follows:

\[ u_i = r_i \cdot e_i, \tag{18} \]

where the ui are the compensation values, which are used to compensate for the position error as in Eq. (19):

\[ G_{com} = G_{act} + (u_x, u_y, u_z)_W, \tag{19} \]

where Gcom is the new position of the gripper for approaching the desired position.
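Equations (17) and (18) are easy to state in code once a shape is fixed for the premise sets. The sketch below assumes triangular membership functions peaking at the a_x^i values of Table 4 and saturating beyond the outer peaks; this shape is read off Fig. 14 and is our assumption, not a printed formula.

    def memberships(e, a):
        """Degrees B^i(e) of five triangular fuzzy sets peaking at
        a[0..4], saturating beyond a[0] and a[4]."""
        B = [0.0] * 5
        if e <= a[0]:
            B[0] = 1.0
        elif e >= a[4]:
            B[4] = 1.0
        else:
            for i in range(4):                   # find the bracketing peaks
                if a[i] <= e <= a[i + 1]:
                    t = (e - a[i]) / (a[i + 1] - a[i])
                    B[i], B[i + 1] = 1.0 - t, t  # complementary triangles
                    break
        return B

    def compensation_factor(e, a, u):
        """Center-average defuzzification, Eq. (17)."""
        B = memberships(e, a)
        return sum(ui * bi for ui, bi in zip(u, B)) / sum(B)

    # Table 4/5 parameters for the x-axis:
    a_x = [-5.0, -3.0, 0.0, 3.0, 5.0]
    u_x = [-0.6, -0.4, 0.0, 0.4, 0.6]
    # r_x = compensation_factor(e_x, a_x, u_x)
    # u = r_x * e_x                              # Eq. (18)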
Overall, the error compensation for the robot arm with the fuzzy controller is shown in Fig. 15. The stereo vision device captures the actual position of the robot arm (Gact) and calculates the deviation values (D). These deviation values are the input to the fuzzy controller, which yields the compensation factors for the robot arm (rx, ry, and rz). Subsequently, the compensation values are used to obtain the new position of the robot arm (Gcom) by inverse kinematics, and then the gripper moves to the new position. The compensation process terminates when |ei| ≤ ρi (i = x, y, z) is satisfied, where ρi is an acceptable error threshold of the gripper position.

Fig. 15 Fuzzy controller for error compensation
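Putting Fig. 15 into procedural form, the loop might look like the sketch below. Here move_to (inverse kinematics plus motor commands) and measure_position (the stereo vision measurement of the gripper) are hypothetical platform calls, and a_y, u_y, a_z, u_z are the Table 4/5 parameter lists for the other two axes; the update follows Eqs. (16)–(19) literally.

    a_y = [-5.5, -1.5, 0.0, 1.5, 5.5]
    u_y = [-0.65, -0.5, 0.0, 0.5, 0.65]
    a_z = [-5.0, -2.0, 0.0, 2.0, 5.0]
    u_z = [-0.65, -0.5, 0.0, 0.5, 0.65]

    def approach_with_compensation(g_desired, rho, max_iters=10):
        """Iterative fuzzy error compensation following Fig. 15 and
        Eqs. (16)-(19); returns True once every |e_i| <= rho_i."""
        move_to(g_desired)                       # i = 0: pure inverse kinematics
        for _ in range(max_iters):
            g_act = measure_position()           # G_act from the stereo vision
            e = [ga - gd for ga, gd in zip(g_act, g_desired)]  # D, Eq. (16)
            if all(abs(ei) <= ri for ei, ri in zip(e, rho)):
                return True                      # within thresholds: grasp
            u = [compensation_factor(e[0], a_x, u_x) * e[0],   # Eq. (18)
                 compensation_factor(e[1], a_y, u_y) * e[1],
                 compensation_factor(e[2], a_z, u_z) * e[2]]
            g_com = [ga + ui for ga, ui in zip(g_act, u)]      # Eq. (19)
            move_to(g_com)                       # re-run inverse kinematics
        return False

    # Thresholds used in Sect. 6: rho = (2.5, 1.5, 1.5) cm
    # approach_with_compensation(target_position, (2.5, 1.5, 1.5))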


6 Experimental Results and Discussion

This section demonstrates a target object-grasping task which drives the robot arm to grasp and raise a thermos. To implement the experiment, some hardware parameters should be assigned as follows. The lengths of the two links are d1 = 17 cm and d2 = 26 cm (see Fig. 10). The target object to be grasped in the experiment is a blue thermos. The distance between the webcams WL and WR is 3 cm and the focal lengths correspond to fL = fR = 265 pixels. The rotation between the world coordinates and the stereo vision coordinates is (ψx, ψy, ψz) = (27°, 0°, 60°). The error thresholds are assigned as ρx = 2.5 cm, ρy = 1.5 cm, and ρz = 1.5 cm in this experiment. Since the thermos's diameter of 7 cm is much smaller than the open range of the gripper, the error thresholds are acceptable.

Figure 16 shows the whole process of the robot arm grasping the thermos, and Table 6 shows the gripper position and its position errors. Firstly, the stereo vision system measures the position of the thermos, which is TVD = (11.52, -8.5, 41.98) cm, as shown in Fig. 16a. Then, the robot arm positions its gripper around the thermos as shown in Fig. 16b, and the iteration count i is 0 as shown in Table 6. Subsequently, Fig. 16c, d shows that the robot arm compensates for its position errors, and the iteration counts i are 1 and 2 as in Table 6. When the gripper moves to a new position, the stereo vision device measures the actual position of the gripper and the fuzzy control is used to compensate for the position errors until the position errors satisfy the accuracy conditions (see Fig. 16c, d). The iteration count i starts from 0 and it can be considered a count of the instances of position compensation. In other words, when i = 0, the robot arm moves to the desired position using only the inverse kinematics method without fuzzy position error compensation; i = 1 indicates that the robot arm has used fuzzy control to compensate the position error once. In this experiment, the robot arm compensates the position errors twice (i = 2). In Table 6, it is seen that the position error is reduced when using fuzzy control to compensate the gripper's position error. As a result of the compensation, the gripper is then in the proper grasping location (12.99, -9.13, 40.52) cm. Figure 17 shows that the position errors ex, ey, and ez are reduced to within the error thresholds ρx, ρy, and ρz at iteration i = 2. Finally, the robot arm successfully grasps the thermos and raises it, as shown in Fig. 16e, f.

Fig. 16 Experimental result of the robot arm grasping a thermos
Fig. 17 Position error for each iteration count

Table 6 Position of gripper and position errors

  i   Gact^(i) (cm)            ex^(i) (cm)   ey^(i) (cm)   ez^(i) (cm)
  0   (11.36, -7.43, 37.26)    -0.17         1.06          -4.72
  1   (14.68, -10.15, 42.82)   3.15          -1.65         0.83
  2   (12.99, -9.13, 40.52)    1.46          -0.63         -1.46

In order to examine the robot's grasping ability, we placed the thermos in five different locations (A–E) in the workspace, as shown in the first rows of Tables 7, 8, 9, 10, and 11. The following rows of Tables 7, 8, 9, 10, and 11 show the gripper position errors for each iteration count. Figures 18, 19, 20, 21, and 22 illustrate the position errors


for each iteration count. As can be seen, the experimental results reveal that the robot arm can successfully approach the thermos, since all position errors are smaller than the given threshold values (ρx = 2.5 cm, ρy = 1.5 cm, and ρz = 1.5 cm) in the final iteration. After the robot arm reaches and grasps the thermos, it raises the thermos approximately 10 cm to ensure that the thermos is assuredly caught, and then the grasping task is finished. However, the error compensation processing may increase the value of each position error of the gripper. Comparing the position errors between the first and second iterations, the position errors in the first iteration are smaller than those of the second iteration. Because of the recoil from the gearbox and the inertia produced by the movement of the robot arm, the adjustment exceeds the compensation values. Similar situations are shown in Fig. 19 when the iteration count is 4 and in Fig. 21 when the iteration count is 3. But in these instances there is still a decrease in the position errors in the following iteration when compared to the previous iteration. In brief, the position errors tend to decrease regardless of the gearbox recoil or the inertia produced by the robot arm moving. Hence, the proposed method can reliably control the robot arm so as to effectively grasp an object.

Table 7 Thermos at location A

  Position of thermos: Xd = 10.35 cm, Yd = -8.5 cm, Zd = 42.1 cm

  i   ex (cm)   ey (cm)   ez (cm)
  0   1.7       -0.78     -4.73
  1   0.23      1.63      -2.36
  2   -1.12     2.8       -3.45
  3   0.17      -0.08     -0.71

Table 8 Thermos at location B

  Position of thermos: Xd = 7.11 cm, Yd = -8.5 cm, Zd = 41.24 cm

  i   ex (cm)   ey (cm)   ez (cm)
  0   1.85      -0.48     -5.72
  1   0.23      0.63      -2.72
  2   0.07      1.81      -1.91
  3   0.74      1.24      -0.03
  4   -0.38     1         -2.11
  5   0.5       0.78      -0.15

Table 9 Thermos at location C

  Position of thermos: Xd = 11.82 cm, Yd = -8.5 cm, Zd = 42.46 cm

  i   ex (cm)   ey (cm)   ez (cm)
  0   -0.26     1.37      -6.13
  1   1.01      0.48      0.55

Table 10 Thermos at location D

  Position of thermos: Xd = 6.16 cm, Yd = -8.5 cm, Zd = 40.85 cm

  i   ex (cm)   ey (cm)   ez (cm)
  0   1.43      -6.64     0.33
  1   0.26      -2.73     1.21
  2   0.62      -1.38     1.62
  3   0.33      -1.06     1.87
  4   1.02      0.38      0.58

Table 11 Thermos at location E

  Position of thermos: Xd = 11.83 cm, Yd = -8.5 cm, Zd = 42.46 cm

  i   ex (cm)   ey (cm)   ez (cm)
  0   -0.27     1.37      -6.13
  1   0.3       0.48      0.55

In Tables 6, 7, 8, 9, and 10, it has to be emphasized that when i = 0, the position error is achieved using only inverse kinematics without fuzzy control, but after i = 1, the


position error is reduced continuously because of the aid of fuzzy control compensation.

Herein, we offer some discussion comparing the proposed method with the papers [23–25]. The paper [23] proposed a PID control algorithm and implemented a ball-grasping task for a robot arm. Since it used the encoder's signal as the feedback, accumulated errors may occur after a sequence of movements. The paper [24] adopted an extended Kalman filtering algorithm to identify the geometric parameters of the model of the robot arm, and then used an artificial neural network to compensate the position errors. In the proposed method, we do not need the mathematical model of the robot arm or any expensive sensor to measure the position of the robot. All we need is the 3-D position measurement by the stereo vision and inverse kinematics with fuzzy control for error compensation. The paper [25] also used stereo cameras to calculate a correction vector from the difference between the robot arm's end-effector position given by inverse kinematics and by vision. Then, the robot arm compensates its position error according to the correction vector. However, their compensation value is obtained by some calculation; it may still have a little error because of real mechanism uncertainties such as backlash or friction. In our method, the robot arm can adjust the compensation values by the fuzzy control method all the way.

Fig. 18 Position errors when the thermos is at location A
Fig. 19 Position errors when the thermos is at location B
Fig. 20 Position errors when the thermos is at location C
Fig. 21 Position errors when the thermos is at location D
Fig. 22 Position errors when the thermos is at location E


Furthermore, the proposed method has two advantages: low cost and high efficiency. Low cost, because only a stereo vision device is needed as the sensor; high efficiency, because fuzzy control compensates the error at the final step. Therefore, any uncertainty and backlash problems are easily dealt with in the movement of the robot arm.

7 Conclusion

This study has described a method to achieve an object-grasping task using only a stereo vision device to trace and guide the motion of the robot arm. The stereo vision device identifies the positions of the target object and the gripper using color space transformation, morphologic operation, and 3-D position measurement. After those positions are obtained, there is usually a position error between the gripper and the target, due to recoil from the gearbox and inertia produced by the movement of the robot arm. In order to compensate for these errors, the fuzzy compensation method is proposed to generate compensation values for each axis. The method is designed according to the principle that when the error in position is small, the movement required to compensate for that error is also relatively small. The fuzzy compensation method then integrates the compensation values and inverse kinematics to estimate and drive the gripper to a new compensated position.

Several experiments are given to demonstrate the implementation of the proposed object-grasping method. The fuzzy error compensation regulates the position of the gripper until the position error satisfies acceptable error thresholds. Therefore, the robot arm can successfully approach the target object and raise it under the guidance of the stereo vision device in all the experiments.

The benefit of using the proposed method is that the robot arm does not need external sensors such as accelerometers or resolvers to measure the degree of rotation on each axis. Thus, the cost of building the robot arm platform can be reduced.

Acknowledgments The authors would like to thank the Ministry of Science and Technology of Taiwan for its support under Contract MOST 103-2221-E-008-001-.

References

1. Song, K.-T., Tsai, S.-C.: Vision-based adaptive grasping of a humanoid robot arm. In: Proceedings of the IEEE International Conference on Automation and Logistics, Zhengzhou, China, August 2012, pp. 155–160
2. Yang, Y., Cao, Q.-X.: Monocular vision based 6D object localization for service robot's intelligent grasping. Comput. Math. Appl. 64(5), 1235–1241 (2012)
3. Kim, H.-H., Kim, D.-J., Park, K.-H.: Robust elevator button recognition in the presence of partial occlusion and clutter by specular reflections. IEEE Trans. Ind. Electron. 59(3), 1597–1611 (2012)
4. Aleotti, J., Caselli, S.: Interactive teaching of task-oriented robot grasps. Robot. Auton. Syst. 58(5), 539–550 (2010)
5. Munashnghe, S.R., Nakamura, M., Goto, S., Kyura, N.: Optimum contouring of industrial robot arms under assigned velocity and torque constraints. IEEE Trans. Syst. Man Cybern. C 31(2), 159–167 (2001)
6. Kennedy, C.W., Desai, J.P.: Modeling and control of the Mitsubishi PA-10 robot arm harmonic drive system. IEEE ASME Trans. Mechatron. 10(3), 263–274 (2005)
7. Shen, W., Gu, J., Milios, E.E.: Self-configuration fuzzy system for inverse kinematics of robot manipulators. In: Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society, Montreal, QC, Canada, June 2006, pp. 41–45
8. Feliu, V., Somolinos, J.A., Garcia, A.: Inverse dynamics based control system for a three-degree-of-freedom flexible arm. IEEE Trans. Robot. Autom. 19(6), 1007–1014 (2003)
9. Shimizu, M., Kakuya, H., Yoon, W.-K., Kitagaki, K., Kosuge, K.: Analytical inverse kinematics computation for 7-DOF redundant manipulators with joint limits and its application to redundancy resolution. IEEE Trans. Robot. 24(5), 1131–1142 (2008)
10. Wang, W.-J., Huang, C.-H., Lai, I.-H., Chen, H.-C.: A robot arm for pushing elevator buttons. In: Proceedings of SICE Annual Conference, Taipei, Taiwan, August 2010, pp. 1844–1848
11. Nilsson, A., Holmberg, P.: Combining a stable 2-D vision camera and an ultrasonic range detector for 3-D position estimation. IEEE Trans. Instrum. Meas. 43(2), 272–276 (1994)
12. Baek, J.-Y., Lee, M.-C.: A study on detecting elevator entrance door using stereo vision in multi floor environment. In: Proceedings of ICROS-SICE International Joint Conference, Fukuoka, Japan, August 2009, pp. 1370–1373
13. Fraser, C.S., Cronk, S.: A hybrid measurement approach for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 64(3), 328–333 (2009)
14. van den Heuvel, F.A.: 3D reconstruction from a single image using geometric constraints. ISPRS J. Photogramm. Remote Sens. 53(6), 354–368 (1998)
15. Zhang, D.-H., Liang, J., Guo, C.: Photogrammetric 3D measurement method applying to automobile panel. In: Proceedings of the 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, February 2010, pp. 70–74
16. Egami, T., Oe, S., Terada, K., Kashiwagi, T.: Three dimensional measurement using color image and movable CCD system. In: Proceedings of the 27th Annual Conference of the IEEE Industrial Electronics Society, Denver, Colorado, USA, November 2001, pp. 1932–1936
17. Hsu, C.-C., Lu, M.-C., Wang, W.-Y., Lu, Y.-Y.: Three-dimensional measurement of distant objects based on laser-projected CCD images. IET Sci. Meas. Technol. 3(3), 197–207 (2009)
18. Aguilar, J.J., Torres, F., Lope, M.A.: Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications. Measurement 18(4), 193–200 (1996)
19. Feng, L., Xiaoyu, L., Yi, C.: An efficient detection method for rare colored capsule based on RGB and HSV color space. In: IEEE International Conference on Granular Computing, Noboribetsu, Japan, October 2014, p. 175
20. Laganière, R.: OpenCV 2 Computer Vision Application Programming Cookbook. Packt Publishing, Birmingham (2011)
21. Jain, R., Kasturi, R., Schunck, B.G.: Machine Vision. McGraw-Hill, New York (1995)
22. Kim, B.S., Lee, S.H., Cho, N.I.: Real-time panorama canvas of natural images. IEEE Trans. Consum. Electron. 57(4), 1961–1968 (2011)
23. Su, J., Zhang, Y.: Integration of a plug-and-play desktop robotic system. Robotica 27, 403–409 (2009)
24. Nguyen, H.-N., Zhou, J., Kang, H.-J.: A calibration method for enhancing robot accuracy through integration of an extended Kalman filter algorithm and an artificial neural network. Neurocomputing 151, 996–1005 (2015)
25. DiCicco, M., Bajracharya, M., Nickels, K., Backes, P.: The EPEC algorithm for vision guided manipulation: analysis and validation. In: Proceedings of the IEEE Aerospace Conference, Big Sky, Montana, March 2007, pp. 1–11

