Appendix A. Linear Algebra
Since modelling and control of robot manipulators requires an extensive use of matrices
and vectors as well as of matrix and vector operations, the goal of this appendix is to
provide a brush-up of linear algebra.
A.1 Definitions
A matrix of dimensions (m x n), with m and n positive integers, is an array of
elements a_ij arranged into m rows and n columns:

        [ a_11  a_12  ...  a_1n ]
    A = [ a_21  a_22  ...  a_2n ]                                    (A.1)
        [  ...   ...  ...   ... ]
        [ a_m1  a_m2  ...  a_mn ]

If m = n, the matrix is said to be square; the matrix is said to be upper triangular
if a_ij = 0 for i > j, and lower triangular if a_ij = 0 for i < j.
An (n x n) square matrix A is said to be diagonal if a_ij = 0 for i ≠ j, i.e.,

    A = diag{a_11, a_22, ..., a_nn}.
1 According to standard mathematical notation, small boldface is used to denote vectors while
capital boldface is used to denote matrices. Scalars are denoted by roman characters.
336 Modelling and Control of Robot Manipulators
If an (n x n) diagonal matrix has all unit elements on the diagonal (a_ii = 1), the matrix
is said to be identity and is denoted by I_n². A matrix is said to be null if all its elements
are null and is denoted by O. The null column vector is denoted by 0.
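As a quick illustration (a NumPy sketch, not part of the original text), the diagonal, identity, and null matrices and their basic behaviour can be checked numerically:

```python
import numpy as np

# Diagonal matrix A = diag{a_11, a_22, a_33}
A = np.diag([1.0, 2.0, 3.0])

# Identity I_n: diagonal matrix with all unit elements
I3 = np.eye(3)

# Null matrix O and null column vector 0
O = np.zeros((3, 3))
o = np.zeros(3)

# A diagonal matrix has a_ij = 0 for i != j
assert A[0, 1] == 0.0 and A[1, 1] == 2.0
# Multiplying by the identity leaves A unchanged
assert np.allclose(I3 @ A, A)
```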
The transpose A^T of a matrix A of dimensions (m x n) is the matrix of dimensions
(n x m) which is obtained from the original matrix by interchanging its rows and
columns:

          [ a_11  a_21  ...  a_m1 ]
    A^T = [ a_12  a_22  ...  a_m2 ]                                  (A.2)
          [  ...   ...  ...   ... ]
          [ a_1n  a_2n  ...  a_mn ]

An (n x n) square matrix A is said to be symmetric if A^T = A, and thus a_ij = a_ji:

        [ a_11  a_12  ...  a_1n ]
    A = [ a_12  a_22  ...  a_2n ]                                    (A.3)
        [  ...   ...  ...   ... ]
        [ a_1n  a_2n  ...  a_nn ]
An (n x n) square matrix A is said to be skew-symmetric if A^T = -A, and thus
a_ij = -a_ji for i ≠ j and a_ii = 0, leading to

        [   0     a_12  ...  a_1n ]
    A = [ -a_12    0    ...  a_2n ]
        [   ...    ...  ...   ... ]
        [ -a_1n  -a_2n  ...    0  ]

A matrix can be partitioned into blocks of proper dimensions; in particular, an
(m x n) matrix A can be partitioned into its row vectors as

        [ a_1^T ]
    A = [  ...  ]
        [ a_m^T ]

where a_i^T denotes the i-th row of A.
2 Subscript n is usually omitted if the dimensions are clear from the context.
Two matrices A and B of the same dimensions (m x n) are equal if aij = bij. If
A and B are two matrices of the same dimensions, their sum is the matrix
C=A+B (A.4)
whose elements are given by Cij = aij + bij. The following properties hold:
A+O=A
A+B=B+A
(A + B) + C = A + (B + C).
Notice that two matrices of the same dimensions and partitioned in the same way can
be summed formally by operating on the blocks in the same position and treating them
like elements.
The product of a scalar α by an (m x n) matrix A is the matrix αA whose elements
are given by αa_ij. If A is an (n x n) diagonal matrix with all equal elements on the
diagonal (a_ii = a), it follows that A = aI_n.
If A is a square matrix, one may write

    A = A_s + A_a                                                    (A.5)

where

    A_s = (1/2)(A + A^T)                                             (A.6)

is a symmetric matrix representing the symmetric part of A, and

    A_a = (1/2)(A - A^T)                                             (A.7)

is a skew-symmetric matrix representing the skew-symmetric part of A.
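The decomposition (A.5)-(A.7) is easy to verify numerically; the following is a NumPy sketch (not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

As = 0.5 * (A + A.T)   # symmetric part, as in (A.6)
Aa = 0.5 * (A - A.T)   # skew-symmetric part, as in (A.7)

assert np.allclose(As, As.T)      # A_s is symmetric
assert np.allclose(Aa, -Aa.T)     # A_a is skew-symmetric
assert np.allclose(A, As + Aa)    # decomposition (A.5)
```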
The product of an (m x p) matrix A by a (p x n) matrix B is the (m x n) matrix

    C = AB                                                           (A.8)
whose elements are given by c_ij = Σ_{k=1}^{p} a_ik b_kj. The following properties hold:
A = A I_p = I_m A
A(BC) = (AB)C
A(B+C) =AB+AC
(A + B)C = AC + BC
(AB)T = BT AT.
Notice that, in general, AB ≠ BA, and AB = O does not imply that A = O or
B = O; further, notice that AC = BC does not imply that A = B.
If an (m x p) matrix A and a (p x n) matrix B are partitioned in such a way that
the number of blocks for each row of A is equal to the number of blocks for each
column of B, and the blocks A_ik and B_kj have dimensions compatible with the product,
the matrix product AB can be formally obtained by operating by rows and columns
on the blocks of proper position and treating them like elements.
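The block-wise product rule can be checked numerically; the sketch below (NumPy, with block sizes chosen for illustration) compares the block-by-block product with the ordinary product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Partition A (3 x 5) and B (5 x 5) into compatible blocks
A11, A12 = rng.random((2, 3)), rng.random((2, 2))
A21, A22 = rng.random((1, 3)), rng.random((1, 2))
B11, B12 = rng.random((3, 4)), rng.random((3, 1))
B21, B22 = rng.random((2, 4)), rng.random((2, 1))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# Block-wise product, treating the blocks like elements
C_blocks = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(C_blocks, A @ B)
```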
For an (n x n) square matrix A, the determinant of A is the scalar given by the
following expression, which holds ∀i = 1, ..., n:

    det(A) = Σ_{j=1}^{n} a_ij (-1)^{i+j} det(A_(ij)),                (A.9)

where A_(ij) is the ((n-1) x (n-1)) matrix obtained from A by eliminating row i
and column j.
The determinant can be computed according to any row i as in (A.9); the same result
is obtained by computing it according to any column j. If n = 1, then det(a_11) = a_11.
The following property holds:
det(A) = det(AT).
As a consequence, if a matrix has two equal columns (rows), then its determinant is
null. Also, it is det(αA) = α^n det(A).
Given an (m x n) matrix A, the determinant of the square block obtained by
selecting an equal number k of rows and columns is said to be k-order minor of
matrix A. The minors obtained by taking the first k rows and columns of A are said
to be principal minors.
If A and B are square matrices, then

    det(AB) = det(A)det(B).                                          (A.10)

If A is an (n x n) triangular matrix, then its determinant reduces to the product of
the elements on the diagonal:

    det(A) = Π_{i=1}^{n} a_ii.
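The cofactor expansion (A.9) and the properties above can be verified with a short NumPy sketch (not part of the original text; `det_cofactor` is an illustrative helper):

```python
import numpy as np

def det_cofactor(A, i=0):
    """Determinant by cofactor expansion along row i, as in (A.9)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor matrix: eliminate row i and column j
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        total += A[i, j] * (-1) ** (i + j) * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

assert np.isclose(det_cofactor(A), np.linalg.det(A))
assert np.isclose(det_cofactor(A.T), det_cofactor(A))   # det(A) = det(A^T)

# For a triangular matrix the determinant is the product of the diagonal
T = np.triu(A)
assert np.isclose(det_cofactor(T), np.prod(np.diag(T)))
```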
More generally, if A is a block triangular matrix with m square blocks A_ii on the
diagonal, then

    det(A) = Π_{i=1}^{m} det(A_ii).                                  (A.11)
The rank ρ(A) of a matrix A of dimensions (m x n) is the maximum integer r such
that at least one nonnull minor of order r exists. The following properties hold:

    ρ(A) ≤ min{m, n}
    ρ(A) = ρ(A^T)
    ρ(A^T A) = ρ(A)
    ρ(AB) ≤ min{ρ(A), ρ(B)}.

A matrix with ρ(A) = min{m, n} is said to be full-rank.
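These rank properties can be spot-checked numerically (a NumPy sketch on an arbitrary full-rank example, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 3))   # generic (4 x 3) matrix, rank 3
B = rng.random((3, 5))   # generic (3 x 5) matrix, rank 3
rank = np.linalg.matrix_rank

assert rank(A) <= min(A.shape)              # rho(A) <= min{m, n}
assert rank(A) == rank(A.T)                 # rho(A) = rho(A^T)
assert rank(A.T @ A) == rank(A)             # rho(A^T A) = rho(A)
assert rank(A @ B) <= min(rank(A), rank(B)) # rho(AB) <= min{rho(A), rho(B)}
```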
The inverse A^{-1} of an (n x n) square matrix A with det(A) ≠ 0 is the unique matrix
satisfying A A^{-1} = A^{-1} A = I_n; it can be computed as

    A^{-1} = (1/det(A)) Adj A,                                       (A.12)

where Adj A is the adjoint matrix, i.e., the transpose of the matrix of cofactors of A.
The following properties hold:

    (A^{-1})^{-1} = A                                                (A.13)
    (A^T)^{-1} = (A^{-1})^T.                                         (A.14)

If A and B are square and nonsingular, then

    (AB)^{-1} = B^{-1} A^{-1}.                                       (A.15)
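Formula (A.12) can be illustrated numerically; in the sketch below (not part of the original text) the adjoint matrix is built explicitly from cofactors:

```python
import numpy as np

def adjugate(A):
    """Adjoint (adjugate) of A: transpose of the matrix of cofactors."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = adjugate(A) / np.linalg.det(A)              # (A.12)
assert np.allclose(A @ A_inv, np.eye(2))            # A A^{-1} = I
assert np.allclose(np.linalg.inv(A.T), A_inv.T)     # (A^T)^{-1} = (A^{-1})^T
```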
Given n square matrices A_ii all invertible, the following expression holds:

    (diag{A_11, ..., A_nn})^{-1} = diag{A_11^{-1}, ..., A_nn^{-1}}.  (A.16)

For a block triangular matrix whose diagonal blocks A and B are invertible, it is

    [ A  C ]^{-1}    [ A^{-1}  -A^{-1} C B^{-1} ]
    [ O  B ]       = [   O            B^{-1}    ]

    [ A  O ]^{-1}    [      A^{-1}         O    ]
    [ C  B ]       = [ -B^{-1} C A^{-1}  B^{-1} ]
If the elements of an (m x n) matrix A(t) are differentiable functions of time, the
derivative of A(t) is the matrix

    dA(t)/dt = [ da_ij(t)/dt ],   i = 1, ..., m;  j = 1, ..., n.     (A.17)

Further, if an (n x n) matrix A(t) is invertible for all t, the derivative of its inverse is

    d(A^{-1}(t))/dt = -A^{-1}(t) (dA(t)/dt) A^{-1}(t).               (A.18)
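Formula (A.18) can be checked against a numerical (central-difference) derivative; the matrix function below is an arbitrary example chosen for illustration, not from the text:

```python
import numpy as np

def A_of_t(t):
    # An example invertible matrix function of time
    return np.array([[2.0 + t, 1.0],
                     [0.0, 1.0 + t ** 2]])

def A_dot(t):
    # Its element-wise time derivative, as in (A.17)
    return np.array([[1.0, 0.0],
                     [0.0, 2.0 * t]])

t, h = 0.5, 1e-6
Ainv = np.linalg.inv(A_of_t(t))

# Analytic derivative of the inverse, as in (A.18)
dAinv = -Ainv @ A_dot(t) @ Ainv

# Central-difference approximation
dAinv_num = (np.linalg.inv(A_of_t(t + h)) - np.linalg.inv(A_of_t(t - h))) / (2 * h)
assert np.allclose(dAinv, dAinv_num, atol=1e-6)
```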
Given a scalar function f(x), endowed with partial derivatives with respect to the
elements x_i of the (n x 1) vector x, the gradient of function f with respect to vector x
is the (n x 1) column vector

    grad_x f(x) = (∂f(x)/∂x)^T = [ ∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_n ]^T.   (A.19)
Further, if x(t) is a differentiable function of time, it is

    ḟ(x) = d f(x(t))/dt = (∂f/∂x) ẋ = grad_x^T f(x) ẋ.               (A.20)
Given a vector function g(x) of dimensions (m x 1), whose elements g_i are
differentiable with respect to the vector x of dimensions (n x 1), the Jacobian matrix
(or simply Jacobian) of the function is defined as the (m x n) matrix

                          [ ∂g_1(x)/∂x ]
    J_g(x) = ∂g(x)/∂x  =  [ ∂g_2(x)/∂x ]                             (A.21)
                          [     ...    ]
                          [ ∂g_m(x)/∂x ]

whose (i, j) element is ∂g_i/∂x_j.
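The Jacobian definition (A.21) can be illustrated by comparing an analytic Jacobian with a central-difference approximation; the vector function below is an arbitrary example, not from the text:

```python
import numpy as np

def g(x):
    # An example (2 x 1) vector function of a (3 x 1) vector
    return np.array([x[0] * x[1], np.sin(x[2])])

def jacobian(x):
    # Analytic (2 x 3) Jacobian of g, element (i, j) = dg_i/dx_j
    return np.array([[x[1], x[0], 0.0],
                     [0.0, 0.0, np.cos(x[2])]])

def jacobian_numeric(f, x, h=1e-6):
    # Central-difference approximation of the Jacobian
    m, n = f(x).shape[0], x.shape[0]
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = np.array([1.0, 2.0, 0.3])
assert np.allclose(jacobian(x), jacobian_numeric(g, x), atol=1e-6)
```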
Given vectors x_1, x_2, ..., x_n of dimensions (m x 1), they are said to be linearly
independent if the expression

    k_1 x_1 + k_2 x_2 + ... + k_n x_n = 0                            (A.22)

holds only when all the constants k_i vanish. A necessary and sufficient condition for
the vectors x_1, x_2, ..., x_n to be linearly independent is that the matrix

    A = [ x_1  x_2  ...  x_n ]

has rank n; this implies that a necessary condition for linear independence is that
n ≤ m. If instead ρ(A) = r < n, then only r vectors are linearly independent and
the remaining n - r vectors can be expressed as a linear combination of the previous
ones.
A system of vectors X is a vector space on the field of real numbers lR if the
operations of sum of two vectors of X and product of a scalar by a vector of X have
values in X and the following properties hold:
    x + y = y + x                 ∀x, y ∈ X
    (x + y) + z = x + (y + z)     ∀x, y, z ∈ X
    ∃0 ∈ X : x + 0 = x            ∀x ∈ X
    ∀x ∈ X, ∃(-x) ∈ X : x + (-x) = 0
    1x = x                        ∀x ∈ X
    α(βx) = (αβ)x                 ∀α, β ∈ ℝ  ∀x ∈ X
    (α + β)x = αx + βx            ∀α, β ∈ ℝ  ∀x ∈ X
    α(x + y) = αx + αy            ∀α ∈ ℝ  ∀x, y ∈ X.
The dimension of the space dim(X) is the maximum number of linearly indepen-
dent vectors x in the space. A set {Xl, X2, ... ,xn} of linearly independent vectors is
a basis of vector space X, and each vector y in the space can be uniquely expressed
as a linear combination of vectors from the basis:
    y = c_1 x_1 + c_2 x_2 + ... + c_n x_n,                           (A.23)

where the constants c_1, c_2, ..., c_n are said to be the components of the vector y in the
basis {x_1, x_2, ..., x_n}.
A subset Y of a vector space X is a subspace Y ⊆ X if it is a vector space with
the operations of vector sum and product of a scalar by a vector, i.e.,

    αx + βy ∈ Y    ∀α, β ∈ ℝ    ∀x, y ∈ Y.                           (A.24)

The scalar product of two vectors x and y of equal dimensions is the scalar
x^T y = y^T x. Two vectors are said to be orthogonal when their scalar product is null:

    x^T y = 0.                                                       (A.25)

The norm of a vector x can be defined as

    ||x|| = sqrt(x^T x),                                             (A.26)

for which the triangle inequality

    ||x + y|| ≤ ||x|| + ||y||                                        (A.27)

and the Schwarz inequality

    |x^T y| ≤ ||x|| ||y||                                            (A.28)

hold. A unit vector x̂ is a vector whose norm is unity, i.e., x̂^T x̂ = 1. Given a vector
x, its unit vector is obtained by dividing each component by its norm:

    x̂ = (1/||x||) x.                                                (A.29)
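A brief numerical illustration of orthogonality (A.25) and the unit vector (A.29), using NumPy (not part of the original text):

```python
import numpy as np

x = np.array([3.0, 4.0])
x_hat = x / np.linalg.norm(x)            # unit vector, as in (A.29)
assert np.isclose(x_hat @ x_hat, 1.0)    # unit norm: x_hat^T x_hat = 1

# Orthogonality: the scalar product is null, as in (A.25)
y = np.array([-4.0, 3.0])
assert np.isclose(x @ y, 0.0)
```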
A typical example of vector space is the Euclidean space whose dimension is 3; in this
case a basis is constituted by the unit vectors of a coordinate frame.
The vector product of two vectors x and y in the Euclidean space is the vector

              [ x_2 y_3 - x_3 y_2 ]
    x × y  =  [ x_3 y_1 - x_1 y_3 ]                                  (A.30)
              [ x_1 y_2 - x_2 y_1 ]
    x × x = 0
    x × y = -y × x
    x × (y + z) = x × y + x × z.
The vector product of two vectors x and y can be expressed also as the product of
a matrix operator S(x) by the vector y. In fact, by introducing the skew-symmetric
matrix
            [   0   -x_3   x_2 ]
    S(x) =  [  x_3    0   -x_1 ]                                     (A.31)
            [ -x_2   x_1    0  ]

it is

    x × y = S(x) y = -S(y) x.                                        (A.32)

The operator S enjoys the following properties:

    S(x) x = S^T(x) x = 0
    S(αx + βy) = αS(x) + βS(y)
    S(x) S(y) = y x^T - (x^T y) I
    S(S(x) y) = y x^T - x y^T.
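The operator S(x) and the properties above can be verified numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

def S(x):
    """Skew-symmetric operator of (A.31): S(x) y = x cross y."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.5, 2.0])

assert np.allclose(S(x) @ y, np.cross(x, y))   # x × y = S(x) y
assert np.allclose(S(x).T, -S(x))              # skew-symmetry
assert np.allclose(S(x) @ x, 0.0)              # S(x) x = 0
# S(x) S(y) = y x^T - (x^T y) I
assert np.allclose(S(x) @ S(y), np.outer(y, x) - (x @ y) * np.eye(3))
```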
Given three vectors x, y, z in the Euclidean space, the following expressions hold
for the scalar triple products:

    x^T (y × z) = y^T (z × x) = z^T (x × y).                         (A.33)

If any two of the three vectors are equal, then the scalar triple product is null; e.g.,

    x^T (x × y) = 0.
A linear transformation between the vector spaces X and Y, of dimensions n and m
respectively, can be defined by the relation

    y = Ax                                                           (A.34)

in terms of the matrix A of dimensions (m x n). The range space (or simply range)
of the transformation is the subspace

    R(A) = {y : y = Ax, x ∈ X} ⊆ Y,                                  (A.36)

whose dimension is equal to the rank of A. On the other hand, the null space (or
simply null) of the transformation is the subspace

    N(A) = {x : Ax = 0, x ∈ X} ⊆ X.                                  (A.37)
The eigenvalues λ and eigenvectors u of an (n x n) square matrix A are the nontrivial
solutions of the equation Au = λu, which can be rewritten as

    (λI - A)u = 0.                                                   (A.44)

For the homogeneous system of equations in (A.44) to have a solution different from
the trivial one u = 0, it must be

    det(λI - A) = 0                                                  (A.45)
which is termed characteristic equation. Its solutions λ_1, ..., λ_n are the eigenvalues
of matrix A; they coincide with the eigenvalues of matrix A^T. On the assumption of
distinct eigenvalues, the n vectors u_i satisfying the equation

    (λ_i I - A)u_i = 0,    i = 1, ..., n                             (A.46)

are linearly independent and are the eigenvectors of A. Collecting them in the matrix
U = [ u_1  u_2  ...  u_n ] gives the similarity transformation

    Λ = U^{-1} A U,                                                  (A.47)

where Λ is the diagonal matrix of the eigenvalues of A. If A is symmetric, its
eigenvalues are real and U can be chosen orthogonal, so that

    Λ = U^T A U.                                                     (A.48)

Given an (m x n) matrix A, an (m x 1) vector x and an (n x 1) vector y, the bilinear
form of x and y is the scalar

    B(x, y) = x^T A y.                                               (A.49)

Given an (n x n) square matrix A, the quadratic form of an (n x 1) vector x is the
scalar

    Q(x) = x^T A x.                                                  (A.50)

Notice that if A is skew-symmetric, then

    x^T A x = 0    ∀x.

The quadratic form is said to be positive definite if

    x^T A x > 0    ∀x ≠ 0,    and    x^T A x = 0  only for  x = 0.   (A.51)
The matrix A core of the form is also said to be positive definite. Analogously, a
quadratic form is said to be negative definite if it can be written as -Q(x) = -x^T A x,
where Q(x) is positive definite.
A necessary condition for a square matrix to be positive definite is that its elements
on the diagonal are strictly positive. Further, in view of (A.48), the eigenvalues of a
positive definite matrix are all positive. If the eigenvalues are not known, a necessary
and sufficient condition for a symmetric matrix to be positive definite is that its
principal minors are strictly positive (Sylvester's criterion). It follows that a positive
definite matrix is full-rank and thus it is always invertible.
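Sylvester's criterion and the eigenvalue test can be illustrated on a small example (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])   # symmetric

# Sylvester's criterion: all principal minors strictly positive
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
assert all(m > 0 for m in minors)

# Equivalently, all eigenvalues of the symmetric matrix A are positive
assert np.all(np.linalg.eigvalsh(A) > 0)

# Hence the quadratic form x^T A x is positive for any x != 0
x = np.array([0.3, -1.2, 0.7])
assert x @ A @ x > 0
```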
A symmetric positive definite matrix A can always be decomposed as
    A = U^T Λ U,                                                     (A.52)

where Λ is the diagonal matrix of the (positive) eigenvalues of A and U is a suitable
orthogonal matrix (U^T U = I). It follows that, for any vector x,

    λ_min(A) ||x||² ≤ x^T A x ≤ λ_max(A) ||x||²,                     (A.53)

where λ_min(A) and λ_max(A) denote the minimum and maximum eigenvalues of A.
A square matrix A is said to be positive semi-definite if

    x^T A x ≥ 0    ∀x.                                               (A.54)
This definition implies that ρ(A) = r < n, and thus r eigenvalues of A are positive
and n - r are null. Therefore, a positive semi-definite matrix A has a null space of finite
dimension, and specifically the form vanishes when x E N(A). A typical example of
a positive semi-definite matrix is the matrix A = HT H where H is an (m x n) matrix
with m < n. In an analogous way, a negative semi-definite matrix can be defined.
Given the bilinear form in (A.49), the gradient of the form with respect to x is
given by
    grad_x B(x, y) = (∂B(x, y)/∂x)^T = A y,                          (A.55)

while the gradient with respect to y is

    grad_y B(x, y) = (∂B(x, y)/∂y)^T = A^T x.                        (A.56)

Given the quadratic form in (A.50) with A symmetric, the gradient of the form with
respect to x is given by

    grad_x Q(x) = (∂Q(x)/∂x)^T = 2 A x.                              (A.57)
Further, if x(t) is a differentiable function of time, the time derivative of the quadratic
form is

    Q̇(x) = d Q(x(t))/dt = 2 x^T A ẋ + x^T Ȧ x;                      (A.58)

if A is constant, the second term is null.
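The gradient formula (A.57) can be checked against a central-difference approximation (a NumPy sketch with an arbitrary symmetric A, not part of the original text):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric

def Q(x):
    return x @ A @ x          # quadratic form (A.50)

def grad_Q(x):
    return 2.0 * A @ x        # gradient, as in (A.57)

x = np.array([0.5, -1.0])
h = 1e-6
grad_num = np.array([
    (Q(x + np.array([h, 0.0])) - Q(x - np.array([h, 0.0]))) / (2 * h),
    (Q(x + np.array([0.0, h])) - Q(x - np.array([0.0, h]))) / (2 * h),
])
assert np.allclose(grad_Q(x), grad_num, atol=1e-6)
```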
A.7 Pseudo-inverse
The inverse of a matrix can be defined only when the matrix is square and nonsingular.
The inverse operation can be extended to the case of non-square matrices. Given a
matrix A of dimensions (m x n) with ρ(A) = min{m, n}, if n < m, a left inverse of A
can be defined as the matrix A_l of dimensions (n x m) so that

    A_l A = I_n.

If instead n > m, a right inverse of A can be defined as the matrix A_r of dimensions
(n x m) so that

    A A_r = I_m.
If A has more rows than columns (m > n) and has rank n, a special left inverse is the
matrix

    A_l^† = (A^T A)^{-1} A^T,                                        (A.59)

which is termed left pseudo-inverse of A³. If W_l is an (m x m) positive definite
matrix, a weighted left pseudo-inverse of A is given by

    A_l^† = (A^T W_l A)^{-1} A^T W_l.                                (A.60)
If A has more columns than rows (m < n) and has rank m, a special right inverse
is the matrix

    A_r^† = A^T (A A^T)^{-1},                                        (A.61)

which is termed right pseudo-inverse of A. If W_r is an (n x n) positive definite
matrix, a weighted right pseudo-inverse of A is given by

    A_r^† = W_r^{-1} A^T (A W_r^{-1} A^T)^{-1}.                      (A.62)

The right pseudo-inverse allows finding a solution to the system of linear equations

    y = Ax                                                           (A.63)

when m < n and ρ(A) = m; among the infinite solutions, the vector

    x = A^† y                                                        (A.64)

is the one that minimizes the norm ||x||.
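The left and right pseudo-inverses (A.59) and (A.61) can be illustrated numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

# Left pseudo-inverse: more rows than columns, full column rank
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                         # (3 x 2)
Al = np.linalg.inv(A.T @ A) @ A.T                  # as in (A.59)
assert np.allclose(Al @ A, np.eye(2))              # Al A = I_n

# Right pseudo-inverse: more columns than rows, full row rank
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])                    # (2 x 3)
Br = B.T @ np.linalg.inv(B @ B.T)                  # as in (A.61)
assert np.allclose(B @ Br, np.eye(2))              # B Br = I_m

# x = Br y is a solution of y = B x
y = np.array([1.0, 2.0])
x = Br @ y
assert np.allclose(B @ x, y)
```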
The scalars σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0, with σ_i = sqrt(λ_i(A^T A)), are said to be
the singular values of matrix A. The singular value decomposition (SVD) of matrix A
is given by

    A = U Σ V^T,                                                     (A.65)

where U is an (m x m) orthogonal matrix

    U = [ u_1  u_2  ...  u_m ],                                      (A.66)
3 Subscripts 1 and r are usually omitted whenever the use of a left or right pseudo-inverse is
clear from the context.
V is an (n x n) orthogonal matrix

    V = [ v_1  v_2  ...  v_n ],                                      (A.67)

and Σ is an (m x n) matrix

    Σ = [ D  O ],    D = diag{σ_1, σ_2, ..., σ_r},                   (A.68)
        [ O  O ]

where σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0. The number of nonnull singular values is equal to
the rank r of matrix A.
The columns of U are the eigenvectors of the matrix AA^T, whereas the columns
of V are the eigenvectors of the matrix A^T A. In view of the partitions of U and V
in (A.66) and (A.67), it is: A v_i = σ_i u_i, for i = 1, ..., r, and A v_i = 0, for
i = r + 1, ..., n.
Singular value decomposition is useful for analysis of the linear transformation
y = Ax established in (A.34). According to a geometric interpretation, the matrix A
transforms the unit sphere in ℝ^n defined by ||x|| = 1 into the set of vectors y = Ax
which define an ellipsoid of dimension r in ℝ^m. The singular values are the lengths
of the various axes of the ellipsoid. The condition number of the matrix,

    κ = σ_1 / σ_r,

provides a measure of how close the matrix is to losing rank: the larger κ, the more
ill-conditioned the matrix.
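The singular value decomposition and its relationship to rank can be illustrated with NumPy (a sketch, not part of the original text; note that `numpy.linalg.svd` returns V^T directly):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])                  # (2 x 3), rank 2

U, s, Vt = np.linalg.svd(A)                      # A = U Sigma V^T, as in (A.65)
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)

# The number of nonnull singular values equals the rank
assert np.sum(s > 1e-12) == np.linalg.matrix_rank(A)

# A v_i = sigma_i u_i for i = 1, ..., r (rows of Vt are the v_i)
for i in range(len(s)):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])
```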
Bibliography
Boullion T.L., Odell P.L. (1971) Generalized Inverse Matrices. Wiley, New York.
DeRusso P.M., Roy R.J., Close C.M., Desrochers A.A. (1998) State Variables for Engineers.
2nd ed., Wiley, New York.
Gantmacher F.R. (1959) Theory of Matrices. Vols. I & II, Chelsea Publishing Company, New
York.
Golub G.H., Van Loan C.F. (1989) Matrix Computations. 2nd ed., The Johns Hopkins University
Press, Baltimore, Md.
Noble B. (1977) Applied Linear Algebra. 2nd ed., Prentice-Hall, Englewood Cliffs, N.J.