Applications of Linear Models in Animal Breeding
Charles R. Henderson
Chapter 1
Models
C. R. Henderson
1984 - Guelph
This book is concerned exclusively with the analysis of data arising from an experiment or sampling scheme for which a linear model is assumed to be a suitable approximation. We should not, however, be so naive as to believe that a linear model is always
correct. The important consideration is whether its use permits predictions to be accomplished accurately enough for our purposes. This chapter will deal with a general
formulation that encompasses all linear models that have been used in animal breeding
and related fields. Some suggestions for choosing a model will also be discussed.
All linear models can, I believe, be written as follows with proper definition of the
various elements of the model. Define the observable data vector with n elements as y.
In order for the problem to be amenable to a statistical analysis from which we can draw
inferences concerning the parameters of the model or can predict future observations it is
necessary that the data vector be regarded legitimately as a random sample from some
real or conceptual population with some known or assumed distribution. Because we
seldom know what the true distribution really is, a commonly used method is to assume
as an approximation to the truth that the distribution is multivariate normal. Analyses
based on this approximation often have remarkable power. See, for example, Cochran
(1937). The multivariate normal distribution is defined completely by its mean and by
its central second moments. Consequently we write a linear model for y with elements in
the model that determine these moments. This is
y = Xβ + Zu + e.

X is a known, fixed, n × p matrix with rank r ≤ minimum of (n, p).
β is a fixed, p × 1 vector, generally unknown, although in selection index methodology
it is assumed, probably always incorrectly, that it is known.
Z is a known, fixed, n × q matrix.
u is a random, q × 1 vector with null means.
e is a random, n × 1 vector with null means.
To illustrate, consider a simple regression model, y_i = β0 + β1 x_i + e_i. Note that in
conceptual repeated sampling the values of x_i remain constant from one sample to another,
but in each sample a new set of e_i is drawn, and consequently the values of y_i change.
Now relative to our general model,

y' = (y1  y2  ...  yn),
β' = (β0  β1),

X' = [ 1   1   ...  1  ]
     [ x1  x2  ...  xn ] ,   and

e' = (e1  e2  ...  en).

Zu does not exist in the model, and R is usually assumed to be Iσe² in regression models.
Suppose we have a random sample of unrelated sires from some population of sires and
that these are mated to a sample of unrelated dams with one progeny per dam. The
resulting progeny are reared in a common environment, and one record is observed on
each. An appropriate model would seem to be
y_ij = μ + s_i + e_ij,

y_ij being the observation on the jth progeny of the ith sire.
Suppose that there are 3 sires with progeny numbers 3, 2, 1 respectively. Then y is a
vector with 6 elements.
y'     = (y11  y12  y13  y21  y22  y31),
x'     = (1  1  1  1  1  1),
u'     = (s1  s2  s3),
e'     = (e11  e12  e13  e21  e22  e31),

         [ 1 0 0 ]
         [ 1 0 0 ]
Z      = [ 1 0 0 ] ,
         [ 0 1 0 ]
         [ 0 1 0 ]
         [ 0 0 1 ]

Var(u) = I3 σs²,   and   Var(e) = I6 σe².
Suppose further that we invoke an additive genetic model with h² = 1/4. Then

         [  1     0    1/30  1/30    0     0   ]
         [  0     1     0     0    1/30  1/30  ]
Var(e) = [ 1/30   0     1    1/60    0     0   ] σe².
         [ 1/30   0    1/60    1     0     0   ]
         [  0    1/30    0     0     1    1/60 ]
         [  0    1/30    0     0    1/60    1  ]
Suppose that we have a random sample of 5 related animals with measurements on 2 correlated traits. We assume an additive genetic model. Let A be the numerator relationship
matrix of the 5 animals. Let

       [ g11  g12 ]
G0  =  [ g12  g22 ]

be the additive genetic variance-covariance matrix and

       [ r11  r12 ]
R0  =  [ r12  r22 ]

be the environmental variance-covariance matrix. Then h² for trait 1 is g11/(g11 + r11), and
the genetic correlation between the two traits is g12/(g11 g22)^(1/2). Order the 10 observations,
animals within traits. That is, the first 5 elements of y are the observations on trait 1.
Suppose that traits 1 and 2 have common means μ1, μ2 respectively. Then

X' = [ 1 1 1 1 1 0 0 0 0 0 ]
     [ 0 0 0 0 0 1 1 1 1 1 ]

and

β' = (μ1  μ2).
The first 5 elements of u are breeding values for trait 1 and the last 5 are breeding
values for trait 2. Similarly the errors are partitioned into subvectors with 5 elements
each. Then Z = I and

G = Var(u) = [ A g11   A g12 ] ,
             [ A g12   A g22 ]

R = Var(e) = [ I r11   I r12 ] .
             [ I r12   I r22 ]
Suppose that we have a random sample of 3 unrelated sires and that they are mated to
unrelated dams. One progeny of each mating is obtained, and the resulting progeny are
assigned at random to two different treatments. The table of subclass numbers is

                        Sires
                     1     2     3
  Treatments   1     2     0     3
               2     1     2     0
Treatments are regarded as fixed, and variances of sires and errors are considered to
be unaffected by treatments. Then
u' = (s1  s2  s3  ts11  ts21  ts22  ts13),

      [ 1 0 0   1 0 0 0 ]
      [ 1 0 0   1 0 0 0 ]
      [ 1 0 0   0 1 0 0 ]
Z  =  [ 0 1 0   0 0 1 0 ] ,
      [ 0 1 0   0 0 1 0 ]
      [ 0 0 1   0 0 0 1 ]
      [ 0 0 1   0 0 0 1 ]
      [ 0 0 1   0 0 0 1 ]

Var(s) = I3 σs²,   Var(st) = I4 σst²,   Var(e) = I8 σe²,   Cov(s, (st)') = 0.
This is certainly not the only linear model that could be invoked for this design. For
example, one might want to assume that sire and error variances are related to treatments.
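As a concrete illustration of how such incidence matrices are assembled, the following numpy sketch (mine, not part of the original text) builds X and Z for the treatment-by-sire layout above; the ordering of observations and of the interaction subclasses is an assumption made only for the illustration.

```python
# Minimal sketch: design matrices for the treatments-by-sires example above.
import numpy as np

# (treatment, sire) for each of the 8 observations, following the subclass table
cells = [(1, 1), (1, 1), (2, 1), (2, 2), (2, 2), (1, 3), (1, 3), (1, 3)]

n = len(cells)
X = np.zeros((n, 3))            # columns: mu, t1, t2 (treatments fixed)
Zs = np.zeros((n, 3))           # sire incidence
Zst = np.zeros((n, 4))          # filled treatment-x-sire subclasses, order of appearance
subclasses = []
for i, (t, s) in enumerate(cells):
    X[i, 0] = 1.0
    X[i, t] = 1.0
    Zs[i, s - 1] = 1.0
    if (t, s) not in subclasses:
        subclasses.append((t, s))
    Zst[i, subclasses.index((t, s))] = 1.0

Z = np.hstack([Zs, Zst])        # u' = (s1 s2 s3, interaction levels)
print(Z.astype(int))
```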
Equivalent Models
It was stated above that a linear model must describe the mean and the variance-covariance
matrix of y. Given these two, an infinity of models can be written, all of which yield the
same first and second moments. These models are called linear equivalent models.
Let one model be y = Xβ + Zu + e with Var(u) = G, Var(e) = R. Let a second
model be y = X*β* + Z*u* + e*, with Var(u*) = G*, Var(e*) = R*. Then the means
of y under these 2 models are Xβ and X*β* respectively, and Var(y) under the 2 models is
ZGZ' + R and Z*G*Z*' + R*.

Consequently we state that these 2 models are linearly equivalent if and only if

Xβ = X*β*   and   ZGZ' + R = Z*G*Z*' + R*.
To illustrate Xβ = X*β*, suppose we have a treatment design with 3 treatments
and 2 observations on each. Suppose we write a model
y_ij = μ + t_i + e_ij,

then

      [ 1 1 0 0 ]              [ μ  ]
      [ 1 1 0 0 ]              [ t1 ]
X  =  [ 1 0 1 0 ] ,    β   =   [ t2 ] .
      [ 1 0 1 0 ]              [ t3 ]
      [ 1 0 0 1 ]
      [ 1 0 0 1 ]
An alternative model is

y_ij = μ_i + e_ij,

then

      [ 1 0 0 ]              [ μ1 ]
      [ 1 0 0 ]              [ μ2 ] .
X* =  [ 0 1 0 ] ,    β*  =   [ μ3 ]
      [ 0 1 0 ]
      [ 0 0 1 ]
      [ 0 0 1 ]

Then if we define μ_i = μ + t_i, it is seen that E(y) is the same in the two models.
To illustrate with two models that give the same V ar(y) consider a repeated lactation
model. Suppose we have 3 unrelated, random sample cows with 3, 2, 1 lactation records,
respectively. Invoking a simple repeatability model, that is, the correlation between any
pair of records on the same animal is r, one model ignoring the fixed effects is
y_ij = c_i + e_ij,

             [ c1 ]     [ r 0 0 ]
Var(c) = Var [ c2 ]  =  [ 0 r 0 ] σy²,
             [ c3 ]     [ 0 0 r ]

Var(e) = I6 (1 − r) σy².
An alternative for the random part of the model is

y_ij = ε_ij,

where Zu does not exist and

                [ 1 r r 0 0 0 ]
                [ r 1 r 0 0 0 ]
Var(ε) = R  =   [ r r 1 0 0 0 ]  σy².
                [ 0 0 0 1 r 0 ]
                [ 0 0 0 r 1 0 ]
                [ 0 0 0 0 0 1 ]
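The equivalence can be checked numerically. The short numpy sketch below (not from the original text) forms Var(y) under both parameterizations of the repeatability model, using an arbitrary value of r, and verifies that they agree.

```python
# Check that the two repeatability parameterizations give the same Var(y).
import numpy as np

r, sy2 = 0.4, 1.0
Z = np.zeros((6, 3))
Z[0:3, 0] = Z[3:5, 1] = Z[5, 2] = 1.0           # cows with 3, 2, 1 records

G = np.eye(3) * r * sy2                         # Var(c)
R = np.eye(6) * (1 - r) * sy2                   # Var(e)
V1 = Z @ G @ Z.T + R                            # first model

blocks = [np.full((k, k), r) + np.eye(k) * (1 - r) for k in (3, 2, 1)]
V2 = sy2 * np.block([
    [blocks[0], np.zeros((3, 2)), np.zeros((3, 1))],
    [np.zeros((2, 3)), blocks[1], np.zeros((2, 1))],
    [np.zeros((1, 3)), np.zeros((1, 2)), blocks[2]],
])                                               # second model, Zu absent

print(np.allclose(V1, V2))                       # True
```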
With some models it is convenient to write them as models for the smallest subclass
mean. By smallest we imply a subclass identified by all of the subscripts in the model
except for the individual observations. For this model to apply, the variance-covariance
matrix of elements of e pertaining to observations in the same smallest subclass must
have the form

[ v  c  ...  c ]
[ c  v  ...  c ]
[ .  .   .   . ] ,
[ c  c  ...  v ]

no covariates exist, and the covariances between elements of e in different subclasses must
be zero. Then the model can be written

ȳ = X̄β + Z̄u + ε.

ȳ is the vector of smallest subclass means. X̄ and Z̄ relate these means to elements
of β and u. The error vector, ε, is the mean of elements of e in the same subclass. Its
variance-covariance matrix is diagonal with the ith diagonal element being

v/n_i + [(n_i − 1)/n_i] c,

where n_i is the number of observations in the ith subclass.
random sample from some population of levels, the levels would be a subvector of u. With
respect to interactions, if one or more letters to the left of the colon represent a factor in
u, the interaction levels are subvectors of u. Thus interaction of fixed by random factors
is regarded as random, as is the nesting of random within fixed. As a final step we decide
the variance-covariance matrix of each subvector of u, the covariance between subvectors
of u, and the variance- covariance matrix of (u, e). These last decisions are based on
knowledge of the biology and the sampling scheme that produced the data vector.
It seems to me that modelling is the most important and most difficult aspect of
linear models applications. Given the model everything else is essentially computational.
Chapter 2
Linear Unbiased Estimation
C. R. Henderson
1984 - Guelph
Verifying Estimability
E(a'y) = a'Xβ. Therefore k'β is estimable if and only if some vector a exists such that
a'X = k'. To illustrate, let

      [ 1 1 2 ]
X  =  [ 1 2 4 ] .
      [ 1 1 2 ]
      [ 1 3 6 ]

Is β1 estimable, that is, is (1 0 0)β estimable? Let a' = (2  −1  0  0); then

a'X = (1 0 0) = k'.

Therefore, k'β is estimable.

Is (0 1 2)β estimable? Let a' = (−1  1  0  0); then

a'X = (0 1 2) = k'.

Therefore, it is estimable.

Is β2 estimable? No, because no a' exists such that a'X = (0 1 0).
Generally it is easier to prove by the above method that an estimable function is indeed estimable than to prove that a non-estimable function is non-estimable. Accordingly,
we consider other methods for determining estimability.
1.1
Second Method
Partition X, with re-ordering of columns if necessary, as X = (X1   X1L), where X1
contains r linearly independent columns. Then k'β is estimable if and only if

k' = (k1'   k1'L),

where k1' has r elements, and k1'L has p − r elements. Consider the previous example.

       [ 1 1 ]
       [ 1 2 ]              [ 0 ]
X1  =  [ 1 1 ] ,   and L =  [ 2 ] .
       [ 1 3 ]

Is (1 0 0)β estimable? Here

k1' = (1 0),   k1'L = (1 0)(0 2)' = 0,

so (k1'  k1'L) = (1 0 0) = k', and the function is estimable.

Is (0 1 2)β estimable? Here

k1' = (0 1),   and k1'L = (0 1)(0 2)' = 2,

so (k1'  k1'L) = (0 1 2) = k', and the function is estimable.

Is (0 1 0)β estimable? Here

k1' = (0 1),   and k1'L = 2 ≠ 0.

Thus (k1'  k1'L) = (0 1 2) ≠ (0 1 0), and the function is not estimable.

1.2   Third Method
k'β is estimable if and only if k'c = 0 for every vector c satisfying Xc = 0. For the X of
this example all such vectors are multiples of

c = (0  2  −1)'.

(1 0 0)β is estimable because

(1 0 0)(0  2  −1)' = 0.

So is (0 1 2)β because

(0 1 2)(0  2  −1)' = 2 − 2 = 0.

In contrast, (0 1 0)β is not estimable because

(0 1 0)(0  2  −1)' = 2 ≠ 0.

1.3   Fourth Method
k'β is estimable if and only if k'(X'X)⁻X'X = k'. In our example

        [  4   7  14 ]
X'X  =  [  7  15  30 ] ,
        [ 14  30  60 ]

and a g-inverse is

        [ 15  −7  0 ]
(1/11)  [ −7   4  0 ] .
        [  0   0  0 ]
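A quick way to apply the fourth method on a computer is sketched below (my illustration, not the book's); it uses the Moore-Penrose inverse as the g-inverse and checks the three functions discussed above.

```python
# Estimability check via k'(X'X)^- X'X = k'.
import numpy as np

X = np.array([[1., 1., 2.],
              [1., 2., 4.],
              [1., 1., 2.],
              [1., 3., 6.]])

def estimable(k, X, tol=1e-8):
    XtX = X.T @ X
    k = np.asarray(k, dtype=float)
    return np.allclose(k @ np.linalg.pinv(XtX) @ XtX, k, atol=tol)

print(estimable([1, 0, 0], X))   # True
print(estimable([0, 1, 2], X))   # True
print(estimable([0, 1, 0], X))   # False
```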
Chapter 3
Best Linear Unbiased Estimation
C. R. Henderson
1984 - Guelph
The problem is to find a such that E(a'y) = k'β, that is, a'X = k', and such that the
variance of the estimator, a'Va, is minimized. This minimization leads to the equations

[ V   X ] [ a ]   [ 0 ]
[ X'  0 ] [ θ ] = [ k ] .

This is a consistent set of equations if and only if k'β is estimable. In that case the
unique solution to a is

a = V⁻¹X(X'V⁻¹X)⁻ k.

A solution to θ is

θ = −(X'V⁻¹X)⁻ k,

and this is not unique when X, and consequently X'V⁻¹X, is not full rank. Nevertheless
the solution to a is invariant to the choice of a g-inverse of X'V⁻¹X. Thus, BLUE of k'β
is

k'(X'V⁻¹X)⁻ X'V⁻¹ y.

But let

β° = (X'V⁻¹X)⁻ X'V⁻¹ y,

so that BLUE of k'β is k'β°, where β° is any solution to the GLS equations.
Let

      [ 1 1 2 ]
X  =  [ 1 2 4 ]
      [ 1 1 2 ]
      [ 1 3 6 ]

and y' = (5 2 4 3). Suppose Var(y) = Iσe². Then the GLS equations are
        [  4   7  14 ]          [ 14 ]
(1/σe²) [  7  15  30 ]  β°  =   [ 22 ] (1/σe²).
        [ 14  30  60 ]          [ 44 ]
A solution is

(β°)' = (56  −10  0)/11.

Then BLUE of (0 1 2)β, which has been shown to be estimable, is

(0 1 2)(56  −10  0)'/11 = −10/11.

Another solution to β° is

(56  0  −5)'/11.

Then BLUE of (0 1 2)β is −10/11, the same as with the other solution to β°.
One frequent difficulty with GLS equations, particularly in the mixed model, is that
V = ZGZ0 + R is large and non-diagonal. Consequently V1 is difficult or impossible to
compute by usual methods. It was proved by Henderson et al. (1959) that
V⁻¹ = R⁻¹ − R⁻¹Z(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹.
Now if R⁻¹ is easier to compute than V⁻¹, as is often true, if G⁻¹ is easy to compute, and (Z'R⁻¹Z + G⁻¹)⁻¹ is easy to compute, this way of computing V⁻¹ may have
important advantages. Note that this result can be obtained by writing equations, known
as Henderson's mixed model equations (1950), as follows:

[ X'R⁻¹X        X'R⁻¹Z       ] [ β° ]   [ X'R⁻¹y ]
[ Z'R⁻¹X   Z'R⁻¹Z + G⁻¹ ] [ û  ] = [ Z'R⁻¹y ] .
Let

      [ 1 1 ]            [ 1 0 ]
X  =  [ 1 2 ] ,    Z  =  [ 1 0 ] ,
      [ 1 1 ]            [ 1 0 ]
      [ 1 3 ]            [ 0 1 ]

      [ .1  0 ]
G  =  [ 0  .1 ] ,   R = I,   y' = [5 4 3 2].

Then the mixed model equations are

[ 4   7   3   1 ] [ β° ]   [ 14 ]
[ 7  15   4   3 ] [    ]   [ 22 ]
[ 3   4  13   0 ] [ û  ] = [ 12 ] .
[ 1   3   0  11 ]          [  2 ]

The solution is [286, −50, 2, −2]/57. In this case the solution is unique because X has
full column rank.
Now consider a GLS solution.

                  [ 1.1  .1   .1   0  ]
V = ZGZ' + R  =   [ .1   1.1  .1   0  ] ,
                  [ .1   .1   1.1  0  ]
                  [ 0    0    0    1.1]

              [ 132  −11  −11    0 ]
V⁻¹ = (1/143) [ −11  132  −11    0 ] .
              [ −11  −11  132    0 ]
              [  0    0    0   130 ]

Then X'V⁻¹X β° = X'V⁻¹y becomes

(1/143) [ 460   830 ] β° = (1/143) [ 1580 ] .
        [ 830  1852 ]              [ 2540 ]
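The numerical agreement of the two routes can be checked directly. The following sketch (mine, not the book's) reproduces the small example above: it solves Henderson's mixed model equations, solves GLS, and forms û = GZ'V⁻¹(y − Xβ°).

```python
# Mixed model equations vs GLS for the small example above.
import numpy as np

X = np.array([[1., 1.], [1., 2.], [1., 1.], [1., 3.]])
Z = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
G = 0.1 * np.eye(2)
R = np.eye(4)
y = np.array([5., 4., 3., 2.])

Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)
C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(C, rhs)                  # (beta1, beta2, u1, u2)

V = Z @ G @ Z.T + R
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
u_blup = G @ Z.T @ Vi @ (y - X @ beta_gls)

print(sol)                                     # approx [5.0175, -0.8772, 0.0351, -0.0351]
print(beta_gls, u_blup)                        # same beta and same u
```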
Variance of BLUE
Once having an estimate of k'β we should like to know its sampling variance. Consider a
set of estimators, K'β°. Then

Var(K'β°) = K'C11K,

where C11 is the upper p × p submatrix of a g-inverse of the mixed model coefficient matrix.
This result can be proved by noting that

C11 = (X'[R⁻¹ − R⁻¹Z(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹]X)⁻
    = (X'V⁻¹X)⁻.
Using the mixed model example, the inverse of the mixed model coefficient matrix is

        [ 926  −415  −86   29 ]
(1/570) [ −415  230   25  −25 ] .
        [ −86    25   56    1 ]
        [  29   −25    1   56 ]

Let

K' = [ 1 0 ] ,
     [ 0 1 ]

so that K'β = β. Then

Var(K'β°) = K'C11K = (1/570) [ 926  −415 ] .
                             [ −415  230 ]
The same result can be obtained from the inverse of the GLS coefficient matrix,
because

( (1/143) [ 460   830 ] )⁻¹  =  (1/570) [ 926  −415 ] .
(         [ 830  1852 ] )               [ −415  230 ]
3.1
Let W be a symmetric matrix with order, s, and rank, t < s. Partition W with possible
re-ordering of rows (and the same re-ordering of columns) as
W = [ W11   W12 ]
    [ W12'  W22 ] ,

where W11 is t × t and non-singular. Then one g-inverse of W is

[ W11⁻¹  0 ]
[   0    0 ] .
        [  4   7   8  15 ]
W  =    [  7  15  17  32 ] .
        [  8  17  22  39 ]
        [ 15  32  39  71 ]

This matrix has rank 3, and the upper 3 × 3 is non-singular with inverse

        [  41  −18   −1 ]
(1/30)  [ −18   24  −12 ] .
        [  −1  −12   11 ]

Therefore a g-inverse of W is

        [  41  −18   −1  0 ]
(1/30)  [ −18   24  −12  0 ] .
        [  −1  −12   11  0 ]
        [   0    0    0  0 ]

Another g-inverse is

        [  41  −17  0   −1 ]
(1/30)  [ −17   59  0  −23 ] .
        [   0    0  0    0 ]
        [  −1  −23  0   11 ]

This was obtained by inverting the full rank submatrix composed of rows (and columns)
1, 2, 4 of W. This type of g-inverse is described in Searle (1971b).
In the mixed model equations a comparable g-inverse is obtained as follows. Partition
X'R⁻¹X with possible re-ordering of rows (and columns) as

[ X1'R⁻¹X1   X1'R⁻¹X2 ]
[ X2'R⁻¹X1   X2'R⁻¹X2 ] ,

where X1 has r linearly independent columns. Invert

[ X1'R⁻¹X1       X1'R⁻¹Z      ]        [ C00   C02 ]
[ Z'R⁻¹X1   Z'R⁻¹Z + G⁻¹ ] ,   say    [ C02'  C22 ] ,

and then insert null rows and columns corresponding to the deleted columns of X, giving

[ C00   0   C02 ]
[  0    0    0  ] .
[ C02'  0   C22 ]

To illustrate, suppose the mixed model coefficient matrix is

[ 5   8   8   3   2 ]
[ 8  16  16   4   4 ]
[ 8  16  16   4   4 ] ,
[ 3   4   4   8   0 ]
[ 2   4   4   0   7 ]

in which the second and third columns (columns of X) are identical. Deleting the third
row and column, inverting, and inserting a null third row and column gives the g-inverse

        [ 656  −300   0  −96  −16 ]
        [ −300  185   0   20   20 ]
(1/560) [   0     0   0    0    0 ] .
        [ −96    20   0   96  −16 ]
        [ −16    20   0  −16   96 ]

With this type of g-inverse the solution to β° is (β1°'  0)', where β1° has r elements. Only
the first p rows of the mixed model equations contribute to lack of rank of the mixed
model matrix. The matrix has order p + q and rank r + q, where r = rank of X, p =
number of columns in X, and q = number of columns in Z.
3.2
A second type of g-inverse is one which imposes restrictions on the solution to β°. Let
M' be a set of p − r linearly independent, non-estimable functions of β. Then a g-inverse
for the GLS matrix is obtained as follows:

[ X'V⁻¹X   M ]⁻¹     [ C11   C12 ]
[ M'       0 ]    =  [ C12'  C22 ] .

C11 is a reflexive g-inverse of X'V⁻¹X. This type of solution is described in Kempthorne
(1952). Let us illustrate with GLS equations as follows.
[ 11  5  6  3  8 ]        [ 12 ]
[  5  5  0  2  3 ]        [  7 ]
[  6  0  6  1  5 ] β°  =  [  5 ]
[  3  2  1  3  0 ]        [  8 ]
[  8  3  5  0  8 ]        [  4 ]

This matrix has order 5 but rank only 3. Two independent non-estimable functions are
needed. Among others the following qualify:

[ 0 1 1 0 0 ]
[ 0 0 0 1 1 ] .
Therefore we invert

[ 11  5  6  3  8  0  0 ]
[  5  5  0  2  3  1  0 ]
[  6  0  6  1  5  1  0 ]
[  3  2  1  3  0  0  1 ] .
[  8  3  5  0  8  0  1 ]
[  0  1  1  0  0  0  0 ]
[  0  0  0  1  1  0  0 ]

The upper 5 × 5 submatrix of this inverse, which is the reflexive g-inverse C11, is

        [  28   −1    1   13  −13 ]
        [  −1   24  −24   −7    7 ]
(1/244) [   1  −24   24    7   −7 ] .
        [  13   −7    7   30  −30 ]
        [ −13    7   −7  −30   30 ]
The corresponding augmented coefficient matrix for the mixed model equations is

[ X'R⁻¹X        X'R⁻¹Z       M ]
[ Z'R⁻¹X   Z'R⁻¹Z + G⁻¹      0 ] .
[ M'                 0       0 ]

Then the block

[ C11   C12 ]
[ C12'  C22 ]

of its inverse is a g-inverse of the mixed model coefficient matrix. The property of β°
coming from this type of g-inverse is

M'β° = 0.
3.3

M' = [ 0 1 1 0 0 ]
     [ 0 0 0 1 1 ]

as before. Then

                    [ 11  5  6  3  8 ]
                    [  5  6  1  2  3 ]
(X'V⁻¹X + MM')  =   [  6  1  7  1  5 ]
                    [  3  2  1  4  1 ]
                    [  8  3  5  1  9 ]

with inverse

        [ 150  −62  −60  −48  −74 ]
        [ −62   85   37   −7    7 ]
(1/244) [ −60   37   85    7   −7 ] ,
        [ −48   −7    7   91   31 ]
        [ −74    7   −7   31   91 ]

which is a g-inverse of the GLS matrix. The resulting solution to β° is the same as in the
previous section.
Reparameterization

An entirely different method for dealing with the not full rank X problem is reparameterization. Let K' be a set of r linearly independent, estimable functions of β, and let α̂
solve

(K'K)⁻¹K'X'V⁻¹XK(K'K)⁻¹ α̂ = (K'K)⁻¹K'X'V⁻¹y.

Then α̂ is BLUE of K'β. This set of equations has a unique solution, and the regular
inverse of the coefficient matrix is Var(α̂). This corresponds to a model

E(y) = XK(K'K)⁻¹α.

This method was suggested to me by Gianola (1980).
From the immediately preceding example we need 3 estimable functions. An independent set is
1 1
0
0
0
.
0
0
0
1 1
The corresponding GLS equations are
12
11 .50 2.50
=
.75
1 .
.5 2.75
2
2.5
.75
2.75
The solution is
0 = (193 8 262)/122.
This is identical to
o
1 1
0
0
0
0
0
0
1 1
from the previous solution in which
0 1 1 0 0
0 0 0 1 1
was forced to equal 0.
(K0 K)1 K0 X0 R1 y
Z0 R1 y
10
Chapter 4
Test of Hypotheses
C. R. Henderson
1984 - Guelph
Much of the statistical literature for many years dealt primarily with tests of hypotheses ( or tests of significance). More recently increased emphasis has been placed, properly
I think, on estimation and prediction. Nevertheless, many research workers and certainly
most editors of scientific journals insist on tests of significance. Most tests involving linear
models can be stated as follows. We wish to test the null hypothesis,

H0'β = c0,

against some alternative hypothesis, most commonly the alternative that β can have any
value in the parameter space. Another possibility is the general alternative hypothesis,

Ha'β = ca.

In both of these hypotheses there may be elements of β that are not determined
by H. These elements are assumed to have any values in the parameter space. H0' and
Ha' are assumed to have full row rank with m and a rows respectively. Also r ≥ m > a.
Under the unrestricted hypothesis a = 0.

Two important restrictions are required logically for H0' and Ha'. First, both H0'β
and Ha'β must be estimable. It hardly seems logical that we could test hypotheses about
functions of β unless we can estimate these functions. Second, the null hypothesis must
be contained in the alternative hypothesis. That is, if the null is true, the alternative
must be true. For this to be so we require that Ha' can be written as MH0' and ca as Mc0
for some M.
Equivalent Hypotheses

It should be recognized that there are an infinity of hypotheses that are equivalent to
H0'β = c. Let P be an m × m, non-singular matrix. Then PH0'β = Pc is equivalent to
H0'β = c. For example, suppose we wish to test that three treatment effects are equal,
t_i − t_3 = 0, i = 1, 2, that is,

[ 1  0  −1 ] t = 0.
[ 0  1  −1 ]

An equivalent hypothesis (only two rows of which are linearly independent) is

[  2/3  −1/3  −1/3 ]
[ −1/3   2/3  −1/3 ] t = 0.
[ −1/3  −1/3   2/3 ]
by
0
0
0
0
0
1
0
0
0
0
0 1 0 0 0
0
1 1 0 0 0
0
0
0 1 0 0 1
0
0 0 1 0 1
0
0 0 0 1 1
= 0.
0 0 0 0 1 0 0 1
0 0 0 0 0 1 0 1 = 0.
0 0 0 0 0 0 1 1
The second sum of squares represents testing the null hypothesis:
0 0 0 0 1 0 0 1
0 0 0 0 0 1 0 1 = 0.
0 0 0 0 0 0 1 1
and the alternative hypothesis: entire parameter space.
2   Test Criteria

2.1   Differences between residuals
Now it is assumed for purposes of testing hypotheses that y has a multivariate normal
distribution. Then it can be proved by the likelihood ratio method of testing hypotheses,
Neyman and Pearson (1933), that under the null hypothesis the following quantity is
distributed as χ²:

(y − Xβ̂0)'V⁻¹(y − Xβ̂0) − (y − Xβ̂a)'V⁻¹(y − Xβ̂a).      (1)
0 1
0 1
1
0 u0 = Z R y .
ZR X ZR Z+G
0
c0
0
H0
0
0
y = μ + t_i + e_ij,   t_i fixed, i = 1, 2, 3,
R = Var(e) = 5I.

Suppose that the numbers of observations on the levels of t_i are 4, 3, 2, and the
treatment totals are 25, 15, 9 with individual observations, (6, 7, 8, 4, 4, 5, 6, 5, 4). We
wish to test that the levels of t_i are equal, which can be expressed as

[ 0 1 0 −1 ] (μ  t1  t2  t3)' = (0  0)'.
[ 0 0 1 −1 ]
The unrestricted GLS equations (with V = 5I these are proportional to the OLS
equations) are

    [ 9 4 3 2 ]           [ 49 ]
 .2 [ 4 4 0 0 ]  β°  = .2 [ 25 ] .
    [ 3 0 3 0 ]           [ 15 ]
    [ 2 0 0 2 ]           [  9 ]

A solution is β°a = (0, 25, 20, 18)/4. Under the null hypothesis the treatment effects are
forced to be equal, and a solution is β°0 = (49/9, 0, 0, 0). Then

(y − Xβ°0)'                 = (5, 14, 23, −13, −13, −4, 5, −4, −13)/9,
(y − Xβ°0)'V⁻¹(y − Xβ°0)    = 146/45,
(y − Xβ°a)'                 = (−1, 3, 7, −9, −4, 0, 4, 2, −2)/4,
(y − Xβ°a)'V⁻¹(y − Xβ°a)    = 9/4.

The difference is

146/45 − 9/4 = 179/180.

2.2
Two easier methods of computation that lead to the same result will now be presented.
The first, described in Searle (1971b), is
β̂a'X'V⁻¹y + θ̂a'ca − β̂0'X'V⁻¹y − θ̂0'c0.      (2)

The first 2 terms are called the reduction in sums of squares under the alternative hypothesis.
The last two terms are the negative of the reduction in sums of squares under the null
hypothesis. In our example

β̂a'X'V⁻¹y + θ̂a'ca = 1087/20,
β̂0'X'V⁻¹y + θ̂0'c0 = 2401/45,

and

1087/20 − 2401/45 = 179/180   as before.
β̂a'X'R⁻¹y + ûa'Z'R⁻¹y + θ̂a'ca − β̂0'X'R⁻¹y − û0'Z'R⁻¹y − θ̂0'c0.      (3)

2.3

The second easy method uses the quadratic

(H0'β° − c0)'[H0'(X'V⁻¹X)⁻H0]⁻¹(H0'β° − c0)
   − (Ha'β° − ca)'[Ha'(X'V⁻¹X)⁻Ha]⁻¹(Ha'β° − ca).      (4)

If Ha is unrestricted the second term of (4) is set to 0. Remember that β° is a solution to the unrestricted GLS equations. In place of (X'V⁻¹X)⁻ one can substitute the
corresponding submatrix of a g-inverse of the mixed model coefficient matrix.
This is a convenient point to prove that an equivalent hypothesis, P(H0'β − c) = 0,
gives the same result as H0'β − c = 0, remembering that P is non-singular. The quantity
corresponding to (4) for P(H0'β − c) is

(H0'β° − c)'P'[PH0'(X'V⁻¹X)⁻H0P']⁻¹P(H0'β° − c)
  = (H0'β° − c)'P'(P')⁻¹[H0'(X'V⁻¹X)⁻H0]⁻¹P⁻¹P(H0'β° − c)
  = (H0'β° − c)'[H0'(X'V⁻¹X)⁻H0]⁻¹(H0'β° − c),

which proves the equality of the two equivalent hypotheses.
Let us illustrate (4) with our example:

H0'β° = [ 0 1 0 −1 ] (0  25  20  18)'/4 = [ 7 ] /4.
        [ 0 0 1 −1 ]                      [ 2 ]

A g-inverse of X'V⁻¹X is

[ 0  0  0  0 ]
[ 0 15  0  0 ] /12,
[ 0  0 20  0 ]
[ 0  0  0 30 ]

so that

H0'(X'V⁻¹X)⁻H0 = [ 45 30 ] /12,
                 [ 30 50 ]

[H0'(X'V⁻¹X)⁻H0]⁻¹ = [  20 −12 ] /45.
                     [ −12  18 ]

Then

(1/4)(7  2) [  20 −12 ] (7  2)'(1/4)(1/45) = 179/180   as before.
            [ −12  18 ]

The d.f. for χ² are 2 because H0 has 2 rows and the alternative hypothesis is unrestricted.
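For readers who want to reproduce the computation, the sketch below (not part of the original text) evaluates criterion (4) for this one-way example and recovers 179/180.

```python
# Quadratic-form test statistic (4) for the one-way example with V = 5I.
import numpy as np

y = np.array([6., 7., 8., 4., 4., 5., 6., 5., 4.])
X = np.zeros((9, 4))
X[:, 0] = 1.0
X[0:4, 1] = X[4:7, 2] = X[7:9, 3] = 1.0        # treatments with 4, 3, 2 observations
V = 5.0 * np.eye(9)

Vi = np.linalg.inv(V)
C = np.linalg.pinv(X.T @ Vi @ X)               # a g-inverse of X'V^{-1}X
beta0 = C @ X.T @ Vi @ y                       # an unrestricted GLS solution

H0 = np.array([[0., 1., 0., -1.],
               [0., 0., 1., -1.]])
d = H0 @ beta0
stat = d @ np.linalg.inv(H0 @ C @ H0.T) @ d
print(stat, 179.0 / 180.0)                     # both approximately 0.99444
```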
2.4
(X1 V1 X1 ) o1 = X1 V1 y
and then
Reduction = ( o1 )0 X1 V1 y.
(5)
X1 V1 X1 X1 V1 X2 0
o1
X1 V1 y
0
0
0
o
X2 V1 X1 X2 V1 X2 H 2 = X2 V1 y .
0
H0
0
0
Then
Reduction = ( o1 )0 X1 V1 y + ( o2 )0 X2 V1 y.
(6)
(7)
o1 = (X1 V1 X1 ) X1 V1 y.
Consequently in order to determine what hypothesis is implied when 2 is deleted
from the model, we need to find some H 0 2 = 0 such that a solution to (6) is o2 = 0.
We illustrate with a two way fixed model with interaction. The numbers of observations per subclass are
!
3 2 1
.
1 2 5
6
14 6 8 4
6 0 3
8 1
4
2
2
0
4
6
1
5
0
0
6
3
3
0
3
0
0
3
2
2
0
0
2
0
0
2
1
1
0
0
0
1
0
0
1
1
0
1
1
0
0
0
0
0
1
2
0
2
0
2
0
0
0
0
0
2
5
0
5
0
0
5
0
0
0
0
0
5
o =
27
10
17
9
7
11
6
2
2
3
5
9
rc = 0.
3 3 1 1
1 1 1 1
0
0
1
0
1
1
0
1
0
0 0 1 1
0 1
1
r
rc
= 0.
Chapter 5
Prediction of Random Variables
C. R. Henderson
1984 - Guelph
Best Prediction
Let ŵ = f(y) be a predictor of the random variable w. Find f(y) such that E(ŵ − w)²
is minimum. Cochran (1951) proved that

f(y) = E(w | y).      (1)
This requires knowing the joint distribution of w and y, being able to derive the conditional mean, and knowing the values of parameters appearing in the conditional mean.
All of these requirements are seldom possible in practice.
Cochran also proved in his 1951 paper the following important result concerning selection. Let p individuals, regarded as a random sample from some population, be candidates
for selection. The realized values of these individuals are w1, . . . , wp, not observable. We
can observe yi, a vector of records on each. (wi, yi) are jointly distributed as f(w, y), independent of (wj, yj). Some function, say f(yi), is to be used as a selection criterion and the
fraction, α, with highest f(yi) is to be selected. What f will maximize the expectation
of the mean of the associated wi? Cochran proved that E(w | y) accomplishes this goal.
This is a very important result, but note that seldom if ever do the requirements of this
theorem hold in animal breeding. Two obvious deficiencies suggest themselves. First, the
candidates for selection have differing amounts of information (number of elements in y
differ). Second, candidates are related and consequently the yi are not independent and
neither are the wi .
Properties of the best predictor are

1.  E(ŵi) = E(wi).      (2)

2.  Var(ŵi − wi) = Var(w | y), averaged over the distribution of y.      (3)

3.  It maximizes the correlation rŵw for all functions of y.      (4)
Because we seldom know the form of distribution of (y, w), consider a linear predictor
that minimizes the squared prediction error. Find w
= a0 y + b, where a0 is a vector and
b a scalar such that E(w w)2 is minimum. Note that in contrast to BP the form of
distribution of (y, w) is not required. We shall see that the first and second moments are
needed.
Let

E(w) = γ,   E(y) = α,   Cov(y, w) = c,   and   Var(y) = V.

Then

E(a'y + b − w)² = a'Va − 2a'c + a'αα'a + b²
                  + 2a'αb − 2a'αγ − 2bγ + Var(w) + γ².

Differentiating this with respect to a and b and equating to 0,

[ V + αα'  α ] [ a ]   [ c + αγ ]
[ α'       1 ] [ b ] = [ γ      ] .

The solution is

a = V⁻¹c,   b = γ − α'V⁻¹c.      (5)

Thus

ŵ = γ + c'V⁻¹(y − α).
Note that this is E(w | y) when y, w are jointly normally distributed. Note also that BLP
is the selection index of genetics. Sewall Wright (1931) and J.L. Lush (1931) were using
this selection criterion prior to the invention of selection index by Fairfield Smith (1936).
I think they were invoking the conditional mean under normality, but they were not too
clear in this regard.
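A minimal numerical sketch of BLP (mine, with arbitrary illustrative moments rather than values from the text) follows.

```python
# Best linear prediction (selection index): w_hat = gamma + c' V^{-1} (y - alpha).
import numpy as np

alpha = np.array([10., 10.])          # E(y), assumed known for BLP
gamma = 0.0                           # E(w)
V = np.array([[4., 1.],
              [1., 3.]])              # Var(y)
c = np.array([1.5, 0.5])              # Cov(y, w)
var_w = 2.0

y = np.array([12., 9.])
a = np.linalg.solve(V, c)             # a = V^{-1} c
w_hat = gamma + a @ (y - alpha)

print(w_hat)
print(c @ np.linalg.solve(V, c))            # Var(w_hat) = Cov(w_hat, w) = c'V^{-1}c
print(var_w - c @ np.linalg.solve(V, c))    # prediction error variance, eq. (9)
```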
Other properties of BLP are unbiasedness, that is,

E(ŵ) = E(w),      (6)

because

E(ŵ) = E[γ + c'V⁻¹(y − α)] = γ + c'V⁻¹(α − α) = γ = E(w).

Also

Var(ŵ) = Var(c'V⁻¹y) = c'V⁻¹VV⁻¹c = c'V⁻¹c,      (7)

Cov(ŵ, w) = c'V⁻¹Cov(y, w) = c'V⁻¹c = Var(ŵ),      (8)

Var(ŵ − w) = Var(w) − c'V⁻¹c,      (9)

and BLP maximizes the correlation

r = a'c / [a'Va Var(w)]^(1/2).      (10)

To see this, maximize log r:

log r = log a'c − .5 log[a'Va] − .5 log Var(w).

Differentiating with respect to a and equating to 0,

c/(a'c) − Va/(a'Va) = 0,   or   Va = c (a'Va)/(a'c).

The ratio on the right does not affect r. Consequently let it be one. Then a = V⁻¹c.
Also the constant, b, does not affect the correlation. Consequently, BLP maximizes r.
BLP of m'w is m'ŵ, where ŵ is BLP of w. Now w is a vector with E(w) = γ and
Cov(y, w') = C. Substitute the scalar, m'w, for w in the statement of BLP. Then BLP
of m'w is

m'γ + m'C'V⁻¹(y − α) = m'[γ + C'V⁻¹(y − α)] = m'ŵ      (11)

because

ŵ = γ + C'V⁻¹(y − α).
In the multivariate normal case, BLP maximizes the probability of selecting the better
of two candidates for selection, Henderson (1963). For fixed number selected, it maximizes
the expectation of the mean of the selected ui , Bulmer (1980).
It should be noted that when the distribution of (y, w) is multivariate normal, BLP
is the mean of w given y, that is, the conditional mean, and consequently is BP with its
desirable properties as a selection criterion. Unfortunately, however, we probably never
know the mean of y, which is X in our mixed model. We may, however, know V
accurately enough to assume that our estimate is the parameter value. This leads to the
derivation of best linear unbiased prediction (BLUP).
Suppose the predictand is the random variable, w, and all we know about it is that it has
mean k0 , variance = v, and its covariance with y0 is c0 . How should we predict w? One
possibility is to find some linear function of y that has expectation, k0 (is unbiased), and
in the class of such predictors has minimum variance of prediction errors. This method
is called best linear unbiased prediction (BLUP).
Let the predictor be a'y. The expectation of a'y is a'Xβ, and we want to choose a
so that the expectation of a'y is k'β. In order for this to be true for any value of β, it is
seen that a must be chosen so that

a'X = k'.      (12)

Subject to this condition we minimize the variance of the prediction error,

Var(a'y − w) = a'Va + v − 2a'c.      (13)

This minimization leads to the equations

[ V   X ] [ a ]   [ c ]
[ X'  0 ] [ θ ] = [ k ] .      (14)
Note the similarity to (1) in Chapter 3, the equations for finding BLUE of k'β.
Solving for a in the first equation of (14),

a = −V⁻¹Xθ + V⁻¹c.

Substituting this value of a in the second equation of (14),

−X'V⁻¹Xθ = k − X'V⁻¹c.      (15)

Then, if the equations are consistent, and this will be true if and only if k'β is estimable,
a solution to θ is

θ = −(X'V⁻¹X)⁻k + (X'V⁻¹X)⁻X'V⁻¹c.      (16)

Substituting this solution for θ into the expression for a, we find

a = V⁻¹X(X'V⁻¹X)⁻k − V⁻¹X(X'V⁻¹X)⁻X'V⁻¹c + V⁻¹c,      (17)

so that the predictor is

a'y = k'β° + c'V⁻¹(y − Xβ°),   where β° = (X'V⁻¹X)⁻X'V⁻¹y.      (18)

This result was described by Henderson (1963) and a similar result by Goldberger (1962).
Note that if k' = 0 and if β is known, the predictor would be c'V⁻¹(y − Xβ).
This is the usual selection index method for predicting w. Thus BLUP is BLP with β°
substituted for β.
4
4.1
We want to predict m0 w in the situation with unknown . But BLP, the minimum MSE
predictor in the class of linear functions of y, involves . Is there a comparable predictor
that is invariant to ?
Let the predictor be
a0 y + b,
invariant to the value of . For translation invariance we require
a0 y + b = a0 (y + Xk) + b
for any value of k. This will be true if and only if a0 X = 0. We minimize
E(a0 y + b m0 w)2 = a0 Va 2a0 Cm + b2 + m0 Gm
when a0 X = 0 and where G = V ar(w). Clearly b must equal 0 because b2 is positive. Minimization of a0 Va 2a0 Cm subject to a0 X = 0 leads immediately to predictor
m0 C0 V1 (y X o ), the BLUP predictor. Under normality BLUP has, in the class of
invariant predictors, the same properties as those stated for BLP.
5
4.2
(19)
Cov(y , w0 ) = T0 C C ,
(20)
and
= C V y .
w
0
= Cov(w,
w0 ) = C V C .
V ar(w)
w) = V ar(w) V ar(w).
V ar(w
(21)
(22)
(23)
(24)
(25)
We now state some useful variances and covariances. Let a vector of predictands be w.
Let the variance-covariance matrix of the vector be G and its covariance with y be C'.
Then the predictor of w is

ŵ = K'β° + C'V⁻¹(y − Xβ°).      (26)

Cov(ŵ, w') = K'(X'V⁻¹X)⁻X'V⁻¹C + C'V⁻¹C
             − C'V⁻¹X(X'V⁻¹X)⁻X'V⁻¹C.      (27)

Var(ŵ) = K'(X'V⁻¹X)⁻K + C'V⁻¹C
          − C'V⁻¹X(X'V⁻¹X)⁻X'V⁻¹C.      (28)

Var(ŵ − w) = Var(w) − Cov(ŵ, w') − Cov(w, ŵ') + Var(ŵ)
           = K'(X'V⁻¹X)⁻K − K'(X'V⁻¹X)⁻X'V⁻¹C
             − C'V⁻¹X(X'V⁻¹X)⁻K + G − C'V⁻¹C
             + C'V⁻¹X(X'V⁻¹X)⁻X'V⁻¹C.      (29)
The mixed model equations, (4) of Chapter 3, often provide an easy method to compute
BLUP. Suppose the predictand, w, can be written as

w = K'β + u,      (30)

where u are the variables of the mixed model. Then it can be proved that

BLUP of w = BLUP of (K'β + u) = K'β° + û,      (31)

where β° and û are solutions to the mixed model equations. From the second equation
of the mixed model equations,

û = (Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹(y − Xβ°).

But it can be proved that

(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹ = C'V⁻¹,

where C = ZG, and V = ZGZ' + R. Also β° is a GLS solution. Consequently,

K'β° + C'V⁻¹(y − Xβ°) = K'β° + û.

From (26) it can be seen that

BLUP of u = û.      (32)
This result was presented by Henderson (1963). The mixed model method of estimation and prediction can be formulated as Bayesian estimation, Dempfle (1977). This is
discussed in Chapter 9.
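The key identity used above, (Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹ = GZ'V⁻¹, can be checked numerically; the sketch below (mine, not the book's) does so for arbitrary positive definite G and R.

```python
# Numerical check of (Z'R^{-1}Z + G^{-1})^{-1} Z'R^{-1} = G Z' V^{-1}, V = Z G Z' + R.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((6, 3))
A = rng.standard_normal((3, 3)); G = A @ A.T + 3 * np.eye(3)   # positive definite G
B = rng.standard_normal((6, 6)); R = B @ B.T + 6 * np.eye(6)   # positive definite R

V = Z @ G @ Z.T + R
lhs = np.linalg.inv(Z.T @ np.linalg.inv(R) @ Z + np.linalg.inv(G)) @ Z.T @ np.linalg.inv(R)
rhs = G @ Z.T @ np.linalg.inv(V)
print(np.allclose(lhs, rhs))          # True
```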
A g-inverse of the coefficient matrix of the mixed model equations can be used to find
needed variances and covariances. Let a g-inverse of the matrix of the mixed model
equations be

[ C11   C12 ]
[ C12'  C22 ] .      (33)

Then

Var(K'β°)            = K'C11K.                          (34)
Cov(K'β°, û')        = 0.                               (35)
Cov(K'β°, u')        = −K'C12.                          (36)
Cov(K'β°, (û − u)')  = K'C12.                           (37)
Var(û)               = G − C22.                         (38)
Cov(û, u')           = G − C22.                         (39)
Var(û − u)           = C22.                             (40)
Var(ŵ − w)           = K'C11K + K'C12 + C12'K + C22.    (41)
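As an illustration of (40), the following sketch (mine, reusing the small example of Chapter 3) extracts C22 from the inverse of the mixed model coefficient matrix and checks it against the direct expression for Var(û − u).

```python
# Var(u_hat - u) = C22 from the MME inverse, checked against the direct formula.
import numpy as np

X = np.array([[1., 1.], [1., 2.], [1., 1.], [1., 3.]])
Z = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
G = 0.1 * np.eye(2)
R = np.eye(4)

Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)
M = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
C22 = np.linalg.inv(M)[2:, 2:]

V = Z @ G @ Z.T + R
Vi = np.linalg.inv(V)
W = np.linalg.inv(X.T @ Vi @ X)                       # X has full column rank here
direct = G - G @ Z.T @ Vi @ Z @ G + G @ Z.T @ Vi @ X @ W @ X.T @ Vi @ Z @ G

print(np.allclose(C22, direct))       # True
```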
Prediction Of Errors

The prediction of errors (estimation of the realized values) is simple. First, consider the
model y = Xβ + ε and the prediction of the entire error vector, ε. From (18), with
Cov(ε, y') = V,

ε̂ = C'V⁻¹(y − Xβ°) = VV⁻¹(y − Xβ°) = y − Xβ°.      (42)

To predict ε_{n+1}, not in the model for y, we need to know its covariance with y,
say c'. Then

ε̂_{n+1} = c'V⁻¹(y − Xβ°) = c'V⁻¹ε̂.      (43)

Next consider prediction of e from the mixed model. Now Cov(e, y') = R. Then

ê = RV⁻¹(y − Xβ°)
  = R[R⁻¹ − R⁻¹Z(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹](y − Xβ°),   from the result on V⁻¹,
  = [I − Z(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹](y − Xβ°)
  = y − Xβ° − Z(Z'R⁻¹Z + G⁻¹)⁻¹Z'R⁻¹(y − Xβ°)
  = y − Xβ° − Zû.      (44)

To predict e_{n+1}, not in the model for y, we need the covariance between it and e, say
c*. Then the predictor is

ê_{n+1} = c*'R⁻¹ê.      (45)
Rpp Rpm
0
Rpm Rmm
(46)
Then
p = y X o Z
e
u,
and
0
p .
m = Rpm R1
e
pp e
Some prediction error variances and covariances follow.
V ar(
ep ep ) = WCW0 ,
where
C1
C2
W = [X Z], C =
where C is the inverse of mixed model coefficient matrix, and C1 , C2 have p,q rows
respectively. Additionally,
0
Cov[(
ep ep ), ( o )0 K] = WC1 K,
0
Cov[(
ep ep ), (
u u)0 ] = WC2 ,
Cov[(
ep ep ), (
em em )0 ] = WCW0 R1
pp Rpm ,
0
0 1
V ar(
em em ) = Rmm Rpm R1
pp WCW Rpp Rpm ,
0
Cov[(
em em ), ( o )0 K] = Rpm R1
pp WC1 K, and
Cov[(
em em ), (
u u)0 ] = Rpm R1
pp WC2 .
9
Prediction Of Missing u
Three simple methods exist for prediction of a u vector not in the model, say un .
n = B0 V1 (y X o )
u
(47)
(48)
0 1
0 1
= Z R y ,
Z R X Z R Z + W11 W12 u
0
n
u
0
0
W12
W22
where
W11 W12
0
W12 W22
G C
C0 Gn
(49)
!1
10
The possibility exists that G is singular. This could be true in an additive genetic model
with one or more pairs of identical twins. This poses no problem if one uses the method
= GZ0 V1 (y X 0 ), but the mixed model method previously described cannot be
u
used since G1 is required. A modification of the mixed model equations does permit a
. One possibility is to solve the following.
solution to o and u
X0 R1 X
X0 R1 Z
GZ0 R1 X GZ0 R1 Z + I
X0 R1 y
GZ0 R1 y
(50)
is BLUP
The coefficient matrix has rank, r + q. Then o is a GLS solution to , and u
of u. Note that the coefficient matrix above is not symmetric. Further, a g-inverse of
it does not yield sampling variances. For this we proceed as follows. Compute C, some
g-inverse of the matrix. Then
!
I 0
C
0 G
has the same properties as the g-inverse in (33).
10
X0 R1 y
GZ0 R1 y
(51)
Then
This coefficient matrix has rank, r+ rank (G). Solve for o , .
= G.
u
Let C be a g-inverse of the matrix of (51). Then
I 0
0 G
I 0
0 G
11
1 1 1 1 1
1 2 1 3 4
1 1 0 0 0
0
, Z = 0 0 1 0 0 ,
0 0 0 1 1
3 2 1
0
4 1
G =
, R = 9I, y = (5, 3, 6, 7, 5).
5
By the basic GLS and BLUP methods
V = ZGZ0 + R =
12
3
12
2
2
13
1
1
1
5
14
1.280757
2.627792
1
1
1
14
o =
11
y X o =
.2839
2.1525
.7161
1.9788
.1102
GZ0 V1
.3220
o
0 1
= GZ V (y X ) = .0297
u
,
.4915
0 u0 ) = (X0 V1 X) X0 V1 ZG
Cov( o , u
!
3.1377 3.5333
.4470
=
, and
.5053
.6936 1.3633
V ar(
u u) = G GZ0 V1 ZG + GZ0 V1 X(X0 V1 X) X0 V1 ZG
1.5274 .7445
4 1
=
2.6719
5
1.1973 1.2432
1.3063
+
.9182
.7943
2.3541
3.7789 1.0498
=
.
4.6822
The mixed model method is considerably easier.
0
.5556 1.2222
1.2222 3.4444
XR X =
XR Z =
Z0 R1 Z =
.2222
0
.1111
12
0
0
,
.2222
2.8889
6.4444
X0 R1 y =
G1
.8889
0 1
, Z R y = .6667 ,
1.3333
.
.2162
3.4444
.3333
.1111
.7778
.4895 .0270
.4384
2.8889
6.4444
.8889
.6667
1.3333
2.0712
.5053
.6936 1.3633
2.8517
2.0794
1.1737
3.7789
1.0498
4.6822
0 u0 ),
The upper 2 x 2 represents (X0 V1 X) , the upper 2 x 3 represents Cov( o , u
and the lower 3 x 3 V ar(
u u). These are the same results as before. The solution is
(5.4153, .1314, .3220, .0297, .4915) as before.
Now let us illustrate with singular G. Let the data be the same as before except
2 1 3
3 4
G=
.
7
Note that the 3rd row of G is the sum of the first 2 rows. Now
V =
and
11
2
11
V1
1
1
12
3
3
4
16
3
3
4
7
16
.0943 .0165
=
.0832
13
.0115
.0115
.0165
.0280
.0832
.3670
1.2409
(X V
X)
1.0803
1.7749
8.7155 2.5779
1.5684
4.8397
.0011
o =
.1065
.1065 .0032 .1032 .1032
=
.0032
.0032
.1516 .1484 .1484
u
.1033
.1033
.1484 .2516 .2516
.1614
1.8375
1.1614
2.1636
.1648
0582
= .5270 .
.5852
u) =
Cov( , u
2.5628 3.6100
V ar(
u u) =
.
6.5883
By the modified mixed model methods
1.2222 3.1111
0 1
GZ R X = 1.4444 3.7778 ,
2.6667 6.8889
0 1
GZ R Z = .2222 .3333 .8889 ,
.6667 .4444 1.5556
6.4444
0 1
GZ R y = 8.2222 , X0 R1 y =
14.6667
2.8889
6.4444
.5556
1.2222
1.2222
1.4444
2.6667
2.8889
6.4444
6.4444
8.2222
14.6667
The solution is (4.8397, .0011, .0582, .5270, .5852) as before. The inverse of the
coefficient matrix is
2.5779
1.5684
.1737
.1913 .3650
.9491 .5563
.9673
.0509 .0182
.
.8081 .7124
.1843
.8842 .0685
1.7572 1.2688
.1516 .0649
.9133
I 0
0 G
gives
1.9309 1.0473
2.9782
2.5628
3.6101
6.5883
These yield the same variances and covariances as before. The analogous symmetric
equations (51) are
5.0
4.4444 9.4444
7.7778 12.2222
21.6667
A solution is [4.8397, .0011, .2697,
0 = (.0582, .5270, .5852) as before.
u
2.8889
6.4444
6.4444
8.2222
14.6667
by G we obtain
0, .1992]. Premultiplying
1.1530 0 .4632
0
0
.3197
!
I 0
Pre-and post-multiplying this matrix by
, yields the same matrix as post0 G
!
I 0
multiplying the non-symmetric inverse by
and consequently we have the
0 G
required matrix for variances and covariances.
15
12
We illustrate prediction of random variables not in the model for y by a multiple trait
example. Suppose we have 2 traits and 2 animals, the first 2 with measurements on traits
1 and 2, but the third with a record only on trait 1. We assume an additive genetic model
and wish to predict breeding value of both traits on all 3 animals and also to predict the
second trait of animal 3. The numerator relationship matrix for the 3 animals is
1 1/2 1/2
1 1/4
1/2
.
1/2 1/4 1
The additive genetic variance-covariance
and
!
! error covariance matrices are assumed
2 2
4 1
to be G0 and R0 =
and
, respectively. The records are ordered
2 3
1 5
animals in traits and are [6, 8, 7, 9, 5]. Assume
1 1 1 0 0
0 0 0 1 1
X =
Z=
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
If the last (missing u6 ) is not included delete the last column from Z. When all u are
included
!
Ag11 Ag12
G=
,
Ag12 Ag22
where gij is the ij th element of G0 , the genetic variance-covariance matrix. Numerically
this is
2 1 1 2 1
1
2 .5 1 2 .5
2
1
.5
2
3 1.5 1.5
3 .75
3
If u6 is not included, delete the 6th row and column from G.
4 0 0 1
4 0 0
4 0
R=
16
0
1
0
0
5
R1 =
.2632
0 .0526
0
0
0
.0526
.
.25
0
0
.2105
0
.2105
0
.2632
2.
0
.6667 1.3333
.6667
0
0
1.3333 .6667
1.3333
Then the mixed model equations for o and u1 , . . . , u5 are
.7763 .1053
.2632 .2632
.25
.0526 .0526
1
.4211 .0526 .0526
0
.2105
.2105
2.4298
1.
.3333 1.3860 .6667
1
2.2632
0
.6667 1.3860 u2
u
9167
0
0
1.5439 .6667
4
u
1.5439
u5
u6
2 1 1 2 1
2 .5 1 2
2 1 .5
= [1 .5 2 1.5 .75]
3 1.5
3
= .1276.
u1
u2
u3
u4
u5
.7763 .1053
.2632 .2632 .25
.0526 .0526
0
2.7632
1.
1. 1.7193 .6667
.6667
2.2632
0
.6667 1.3860
0
2.25
.6667
0
1.3333
1.8772 .6667
.6667
1.5439
0
1.3333
17
The solution is (6.9909, 6.9959, .0545, -.0495, .0223, .2651, -.2601, .1276), and equals the
previous solution.
The predictor of the record on the second trait on animal 3 is some new 2 + u6 + e6 .
We already have u6 . We can predict e6 from e1 . . . e5 .
1
2
u1
u2
u3
u4
u5
e1
e2
e3
e4
e5
y1
y2
y3
y4
y5
1
1
1
0
0
0
0
0
1
1
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
1.0454
1.0586
.0132
1.7391
1.7358
Then e6 = (0 0 1 0 0) R1 (
e1 . . . e5 )0 = .0033. The column vector above is Cov [e6 , (e1 e2 e3 e4 e4 e5 )].
0
R above is V ar[(e1 . . . e5 ) ].
Suppose we had the same model as before but we have no data on the second trait.
We want to predict breeding values for both traits in the 3 animals, that is, u1 , . . . , u6 .
We also want to predict records on the second trait, that is, u4 + e4 , u5 + e5 , u6 + e6 . The
mixed model equations are
.75
.25
2.75
.25
1.
2.25
.25
0
0
0
1. 1.6667
.6667
.6667
0
.6667 1.3333
0
2.25
.6667
0 1.3333
1.6667 .6667 .6667
1.3333
0
1.3333
u1
u2
u3
u4
u5
u6
The solution is
[7.0345, .2069, .1881, .0846, .2069, .1881, .0846].
The last 6 values represent prediction of breeding values.
e1
y1
2 = y2 (X Z)
e
e3
y3
Then
u1
u2
u3
e4
1 0 0
4 0 0
5
e
= 0 1 0 0 4 0
e6
0 0 1
0 0 4
18
.8276
= .7774 .
.0502
e1
.2069
2
e
= .1944 .
e3
.0125
5.25
1.50
2.00
1.75
0
0
0
.2069
.2069
2 + .1881 + .1944 ,
.0846
.0125
but 2 is unknown.
13
A Singular Submatrix In G
G =
0 1
1
1
Z1 R Z1 + G11 Z1 R1 Z2
Z1 R X
0
0
0
1
1
1
G22 Z2 R Z2 + I
G22 Z2 R X G22 Z2 R Z1
X0 R1 y
o
0 1
1 = Z1 R y
u
.
0
1
2
u
G22 Z2 R y
(52)
Let a g-inverse of this matrix be C. Then the prediction errors come from
I 0 0
C 0 I 0 .
0 0 G22
(53)
Z1 R1 Z1 + G1
Z1 R1 Z2 G22
Z1 R1 X
11
0
0
0
1
1
1
G22 Z2 R Z2 G22 + G22
G22 Z2 R X G22 Z2 R Z1
X0 R1 y
o
1
u
= Z1 R1 y
,
0
1
2
G22 Z2 R y
(54)
2 = G22
2.
and u
Let C be a g-inverse of the coefficient matrix
ances come from
I 0 0
I
0 I 0
C 0
0 0 G22
0
19
0 0
I 0
.
0 G22
(55)
14
yi = xi + zi u + ei .
0
(56)
and ei , BLUP
Then if we have available BLUE of xi = xi o and BLUP of u and ei , u
of this future record is
0
0
+ ei .
xi o + zi u
Suppose however that we have information on only a subvector of say 2 . Write
the model for a future record as
0
x1i 1 + x2i 2 + zi u + ei .
Then we can assert BLUP for only
0
x2i 2 + z02 u + ei .
But if we have some other record we wish to compare with this one, say yj , with
model,
0
0
0
yj = x1j 1 + x2j 2 + zj u + ej ,
we can compute BLUP of yi yj provided that
x1i = x1j .
It should be remembered that the variance of the error of prediction of a future record
(or linear function of a set of records) should take into account the variance of the error
of prediction of the error (or linear combination of errors) and also its covariance with
. See Section 8 for these variances and covariances. An extensive discussion of
o and u
prediction of future records is in Henderson (1977b).
15
In some genetic problems, and in particular individual animal multiple trait models, the
order of the mixed model coefficient matrix can be much greater than n, the number
of observations. In these cases one might wish to consider a method described in this
20
section, especially if one can thereby store and invert the coefficient matrix in cases when
the mixed model equations are too large for this to be done. Solve equations (57) for o
and s .
V X
X0 0
s
o
y
0
(57)
(58)
is BLUP of u. It is easy to see why these are true. Eliminate s from equations (57). This
gives
(X0 V1 X) o = X0 V1 y,
which are the GLS equations. Solving for s in (57) we obtain
s = V1 (y X o ).
Then GZ0 s = GZ0 V1 (y X o ), which we know to be BLUP of u.
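A small numerical sketch (mine) of equations (57), using the example data of Chapter 3, shows that the bordered system reproduces the GLS solution and BLUP of u.

```python
# Solve [V X; X' 0][s; beta] = [y; 0] and form u_hat = G Z' s.
import numpy as np

X = np.array([[1., 1.], [1., 2.], [1., 1.], [1., 3.]])
Z = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
G = 0.1 * np.eye(2)
R = np.eye(4)
y = np.array([5., 4., 3., 2.])

V = Z @ G @ Z.T + R
n, p = X.shape
A = np.block([[V, X], [X.T, np.zeros((p, p))]])
sol = np.linalg.solve(A, np.concatenate([y, np.zeros(p)]))
s, beta = sol[:n], sol[n:]
u_hat = G @ Z.T @ s

print(beta)        # same GLS solution as before
print(u_hat)       # same BLUP of u as before
```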
Some variances and covariances from a g-inverse of the matrix of (57) are shown
below. Let a g-inverse be
!
C11 C12
.
0
C12 C22
Then
V ar(K0 o )
V ar(
u)
0 o
0)
Cov(K , u
Cov(K0 o , u0 )
0 u0 )
Cov(K0 o , u
V ar(
u u)
=
=
=
=
=
=
K0 C22 K.
GZ0 C11 VC11 ZG.
0
K0 C12 VC11 ZG = 0.
0
K0 C12 ZG
0
K0 C12 ZG.
G V ar(
u).
(59)
(60)
(61)
(62)
(63)
(64)
The matrix of (57) will often be too large to invert for purposes of solving s and o .
With mixed model equations that are too large we can solve by Gauss-Seidel iteration.
Because this method requires diagonals that are non-zero, we cannot solve (57) by this
, but not in o , an iterative method can be used.
method. But if we are interested in u
Subsection 4.2 presented a method for BLUP that is
0
= C V y .
u
Now solve iteratively
V s = y ,
21
(65)
then
= C s.
u
(66)
Remember that V has rank = n r. Nevertheless convergence will occur, but not to a
unique solution. V (and y ) could be reduced to dimension, n r, so that the reduced
V would be non-singular.
Suppose that
0
X =
0
C =
1 1 1 1 1
1 2 3 2 4
1 1 2 0 3
2 0 1 1 2
9 3 2 1
8 1 2
9 2
V=
1
2
1
2
8
,
,
y0 = [6 3 5 2 8].
by GZ0 V1 (y X o ).
First let us compute o by GLS and u
The GLS equations are
.335816 .828030
.828030 2.821936
1.622884
4.987475
( o )0 = [1.717054 1.263566].
From this
0 = [.817829 1.027132].
u
By the method of (57) we have equations
9 3 2 1
8 1 2
9 2
1
2
1
2
8
1
1
1
1
1
0
1
2
3
2
4
0
0
s
o
6
3
5
2
8
0
0
22
=
and
y =
0
0 0 0
0
0 0 0
1 2 1 0
0 1 0 1
2 3 0 0
or
0
0
0
0
1
y,
y = T0 y,
y = [0 0 5 1 11].
Using the last 3 elements of y gives
38 11 44
0
0
11 14
V =
, C =
72
Then
1 1 2
3
1 6
= C V1 y = same as before.
u
Another possibility is to compute by OLS using elements 1, 3 of y. This gives
=
and
1.5 0 .5 0 0
.5 0
.5 0 0
y,
9.5 4.0
=
, C =
25.5
.5 1.5 .5
1.5 .5 1.5
.
This gives the same value for u
Finally we illustrate by GLS.
=
0
y =
.780362
.254522 .142119
.645995 .538760
.242894 .036176
.136951 .167959 .310078
y.
3.268734 .852713
.025840 2.85713
.904393
3.744186 .062016
2.005168
0
C =
.940568
.015504
.090439 .984496 .165375
.909561 1.193798 .297158 .193798 .599483
0
0
0
0
.211736 .068145 0
.
.315035 0
0 0
0
0
.
0
0
0
1.049149 .434783
.75000
as before.
This gives the same u
0
16
If R is singular, the usual mixed model equations, which require R1 , cannot be used.
Harville (1976) does describe a method using a particular g-inverse of R that can be used.
Finding this g-inverse is not trivial. Consequently, we shall describe methods different
from his that lead to the same results. Different situations exist depending upon whether
X and/or Z are linearly independent of R.
24
16.1
If R has rank t < n, we can write R with possible re-ordering of rows and columns as
!
R1
R1 L
L0 R1 L0 R1 L
R=
s
o
y
0
(67)
= GZ0 s. See section 14. It should be noted that (67) is not a consistent set of
and u
equations unless
!
y1
y=
.
L0 y1
If X has full column rank, the solution to o is unique. If X is not full rank, K0 o
= GZ0 s is
is unique, given K0 is estimable. There is not a unique solution to s but u
unique.
Let us illustrate with
0
X = (1 2 3), Z =
1 2 3
2 1 3
, y0 = (5 3 8),
3 1 2
4 3
R=
, G = I.
5
Then
8 3 11
9 12
V = R + ZGZ =
,
23
8
3 11
1
3
9 12
2
11 12
23 3
1
2 3
0
25
s1
s2
s3
o
5
3
8
0
1 2
1 2
2 1
3 1
1
4
!1
1 1 2
2 2 1
1 2
o
1 = 1 2
u
u2
2 1
These are
3 1
1
4
0 0 0
+ 0 1 0
0 0 1
!
5
3
51
20 20 19
o
1 = 51
/11.
20 31 19 u
u2
60
19 19 34
111
16.2
In this case V is singular but with X independent of V equations (67) have a unique
solution if X has full column rank. Otherwise K0 o is unique provided K0 is estimable.
In contrast to section 15.1, y need not be linearly dependent upon V and R. Let us use
the example of section 14.1 except now X0 = (1 2 3), and y0 = (5 3 4). Then the unique
solution is (s o )0 = (1104, 588, 24, 4536)/2268.
16.3
Z linearly independent of R
26
17
2 1
1 3
1.4 .6 .8
.6 0
.6
.8
.4
.2
.2
.6
1.2 .2
1.2
(68)
0
0
0
0
2.38739
Let K =
1 1 0
1 0 1
(69)
. Then
V ar
K0 o
u
u
K0
I2
[Matrix (69)] (K I2 )
=
.
1.84685
1.30631
2.38739
3.33333 1.66667
3.198198
V ar
K0 o
27
0
0
0
0
.15315 .30631
.61261
(70)
(71)
.33333
.33333
.33333
0
.04504
.04504 .09009
.35135
.03604
.03604 .07207
.08108
.07207 .07207
.14414 .16216
0
0
0
.21622
.21622
.21622
.02703 .02703 .02703
.05405
.05405
.05405
(72)
computed by
K0
I2
Contribution of R to V ar
[matrix (71)]
K0 o
X0 R1
Z0 R1
1.6667
0
0
0
1.37935
.10348
.20696
=
.08278 .16557
.33114
(73)
For u in
K0 o
K0
I2
Contribution of G to V ar
K0 o
[matrix (72)] Z
.66667
.33333
.44144
.55856
.15315 .15315
.30631
.30631
(74)
1.6667 1.66662
0
0
1.81885 .10348
.20696
=
.07037 .14074
.28143
28
(75)
Then the sum of matrix (73) and matrix (75) = matrix (71). For variance of prediction
errors we need
Matrix (74)
0
0
1
0
0
0
0
1
.66667
.33333
.44144
.55856
.84685 .15315
.30631 .69369
(76)
.
=
1.76406
1.47188
2.05624
(77)
Then prediction error variance is matrix (73) + matrix (77) = matrix (70).
18
In most applications of BLUE and BLUP it is assumed that Cov(u, e0 ) = 0. If this is not
the case, the mixed model equations can be modified to account for such covariances. See
Schaeffer and Henderson (1983).
Let
V ar
e
u
R
S0
S
G
(78)
Then
V ar(y) = ZGZ0 + R + ZS0 + SZ0 .
(79)
(80)
where T = Z + SG1 ,
V ar
G
0
0
B
(81)
as in the original model, thus proving equivalence. Now the mixed model equations are
X0 B1 X X0 B1 T
T0 B1 X T0 B1 T + G1
X0 B1 y
T0 B1 y
(82)
A g-inverse of this matrix yields the required variances and covariances for estimable
, and u
u.
functions of o , u
B can be inverted by a method analogous to
V1 = R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1
where V = ZGZ0 + R,
B1 = R1 + R1 S(G S0 R1 S)1 S0 R1 .
(83)
0 1
= T0 R1 y
.
T R X T0 R1 T + G1 T0 R1 S
u
0 1
0 1
0 1
0 1
SR y
SR X SR T
SR SG
(84)
This may not be a good set of equations to solve iteratively since (S0 R1 SG) is negative
definite. Consequently Gauss- Seidel iteration is not guaranteed to converge, Van Norton
(1959).
We illustrate the method of this section by an additive genetic model.
X=
1
1
1
1
1
2
1
4
1. .5 .25 .25
1. .25 .25
Z = I4 , G =
1. .5
1.
R = 4I4 ,
2.88625
B=
and
.50625
2.88625
.10125
.10125
2.88625
.10125
.10125
.50625
2.88625
T = T0 =
.
2.2375
.5625
2.2375
30
1.112656 2.225313
.403338
.403338
.403338
.403338
6.864946 .079365
1.097106 .660224
2.869187
3.451184 1.842933
3.451184
1o
2o
u1
u2
u3
u4
7.510431
17.275150
1.389782
2.566252
2.290575
4.643516
6.8
V ar(y) = V =
.5 .25 .25
6.8 .25 .25
.
6.8 .5
6.8
3.461538
7.846280
and
= (4.78722, .98139)0
as before.
Cov(u, y0 ) =
1.90
.50
1.90
.25
.25
1.90
.25
.25
.50
1.90
= GZ0 + S0 .
19
o
w T o
X0 R1 y
Z0 R1 y
(85)
Re-write (85) as
o
X0 R1 X X0 R1 ZT X0 R1 Z
Z0 R1 X M
Z0 R1 Z + G1
"
X0 R1 y
Z0 R1 y
(86)
X0 R1 y T0 Z0 R1 y
Z0 R1 y
C11 C12
0
C12 C22
(87)
. Then
V ar(K0 o ) = K0 C11 K.
w) = C22 .
V ar(w
Hendersons mixed model equations for a selection model, equation (31), in Biomet!
X
rics (1975a) can be derived from (86) by making the following substitutions,
for
B
X, (0 B) for T, and noting that B = ZBu + Be .
We illustrate (87) with the following example.
X=
1
2
1
3
2
1
1
4
Z=
1
2
1
4
1
3
2
1
2
2
1
3
5 1 1 2
6 2 1
R=
7 1
8
3 1
3 1 1
0
4 2
G=
, T = 2 3 , y = (5, 2, 3, 6).
5
2 4
32
2.024882 1.142462
2.077104
2.651904
3.871018
3.184149
1.867133
3.383061
(88)
The solution is
(2.114786, 2.422179, .086576, .757782, .580739).
The equations for solution to and to w = u + T are
2.763701
1.154009
1.822952
2.024882
1.142462
2.077104
17.400932
18.446775
3.184149
1.867133
3.383061
(89)
The solution is
(2.115, 2.422, 3.836, 3.795, 6.040).
+ T o of the previous solution gives w
20
0
I
I
T
[inverse of (88)]
0
I
, = [inverse of (89)]
This section describes first the method used by Henderson (1950) to derive his mixed
model equations. Then a more general result is described. For the regular mixed model
E
u
e
= 0, V ar
u
e
33
G
0
0
R
X
T
y
w
V ar
with
V
C0
C
G
!1
V
C0
C
G
C11 C12
0
C12 C22
Log of f (y, w) is
k[(y X)0 C11 (y X) + (y X)0 C12 (w T)
0
X0 C11 y + T0 C12 y
0
C12 y
(90)
we obtain
Eliminating w
0
o
0
1
X0 (C11 C12 C1
22 C12 )X = X (C11 C12 C22 C12 )y.
1
C11 C12 C1
22 C12 = V .
o
o
= C1
w
22 C12 (y X ) + T .
= C0 V1 (y X o ) + T o .
0
0 1
= BLUP of w because C1
22 C12 = C V .
34
(91)
0 1
To prove that C1
note that by the definition of an inverse C12 V+C22 C0 =
22 C12 = C V
1
0. Pre-multiply this by C1
to obtain
22 and post-multiply by V
0
0 1
0 1
C1
= 0 or C1
22 C12 + C V
22 C12 = C V .
We illustrate the method with the same example as that of section 18.
46
V = ZGZ0 + R =
66 38 74
118 67 117
, C = ZG =
45 66
149
V
ZG
GZ0 G
6 9 13
11 18 18
6 11 10
16 14 21
, we obtain
.160839 .009324
.140637
C12 =
and
.044289
.258741
.006993
.477855
.069930 .236985
.433566 .247086
,
.146853
.002331
.034965 .285159
C22
C11
2.024882 1.142462
=
.
2.077104
Then applying (90) to these results we obtain the same equations as in (89).
The method of this section could have been used to derive the equations of (82) for
Cov(u, e0 ) 6= 0.
f (y, u) = g(y | u) h(u).
35
This method also could be used to derive the result of section 18. Again we make
use of f (y, w) = g(y | w) h(w).
E(y | w) = X + Z(w T).
V ar(y | w) = R.
Then
log g(y | w) h(w) = k[(y X + Zw ZT)0 R1 (y X + Zw ZT)]
+ (w T)0 G1 (w T).
This is maximized by solving equations (87).
36
Chapter 6
G and R Known to Proportionality
C. R. Henderson
1984 - Guelph
In the preceding chapters it has been assumed that V ar(u) = G and V ar(e) =
R are known. This is, of course, an unrealistic assumption, but was made in order to
present estimation, prediction, and hypothesis testing methods that are exact and which
may suggest approximations for the situation with unknown G and R. One case does
exist, however, in which BLUE and BLUP exist, and exact tests can be made even when
these variances are unknown. This case is G and R known to proportionality.
Suppose that we know G and R to proportionality, that is,

G = G*σe²,   R = R*σe²,      (1)

where G* and R* are known, but σe² is not. For example, this is the situation in a one
way mixed model in which only the ratio of the two variance components need be known.
Multiplying both sides of the GLS equations by σe², we obtain a set of equations that can
be written as

X'V*⁻¹X β° = X'V*⁻¹y,   where V* = ZG*Z' + R*.      (3)
Similarly the mixed model equations can be written with G* and R* in place of G and R:

[ X'R*⁻¹X          X'R*⁻¹Z        ] [ β° ]   [ X'R*⁻¹y ]
[ Z'R*⁻¹X    Z'R*⁻¹Z + G*⁻¹ ] [ û  ] = [ Z'R*⁻¹y ] .      (4)

Also

Var(K'β°) = K'C11K σe²,      (5)

where C11 is the upper p × p submatrix of a g-inverse of the matrix of (4). Similarly all of
the results of (34) to (41) in Chapter 5 are correct if we multiply them by σe².

Of course σe² is unknown, so we can only estimate the variance by substituting some
estimate of σe², say σ̂e², in (5). There are several methods for estimating σe², but the most
frequently used one is the minimum variance, translation invariant, quadratic, unbiased
estimator computed by
σ̂e² = [y'V*⁻¹y − (β°)'X'V*⁻¹y]/[n − rank(X)]      (6)

or by

σ̂e² = [y'R*⁻¹y − (β°)'X'R*⁻¹y − û'Z'R*⁻¹y]/[n − rank(X)].      (7)
A more detailed account of estimation of variances is presented in Chapters 10, 11, and
12.
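A short sketch (mine) of estimator (7), applied to the small example of Chapter 3 with G* and R* playing the roles of the proportional matrices, follows.

```python
# Quadratic, translation invariant estimator of sigma_e^2, formula (7).
import numpy as np

X = np.array([[1., 1.], [1., 2.], [1., 1.], [1., 3.]])
Z = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
Gs = 0.1 * np.eye(2)                  # G*, known only up to sigma_e^2
Rs = np.eye(4)                        # R*
y = np.array([5., 4., 3., 2.])

Ri, Gi = np.linalg.inv(Rs), np.linalg.inv(Gs)
M = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(M, rhs)
beta, u = sol[:2], sol[2:]

n, rank_X = len(y), np.linalg.matrix_rank(X)
sigma_e2 = (y @ Ri @ y - beta @ X.T @ Ri @ y - u @ Z.T @ Ri @ y) / (n - rank_X)
print(sigma_e2)
```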
of (4) is BLUP.
Next looking at BLUP of u under model (1), it is readily seen that u
Similarly variances and covariances involving u and u u are easily derived from the
results for known G and R. Let
!
C11 C12
C12 C22
2
(8)
(9)
(10)
Tests of Hypotheses
Two different types of errors can be made in tests of hypotheses. First, the null hypothesis
may be rejected when in fact it is true. This is commonly called a Type 1 error. Second,
the null hypothesis may be accepted when it is really not true. This is called a Type 2
error, and the power of the test is defined as 1 minus the probability of a Type 2 error.
The results that follow regarding power assume that G and R are known.
The power of the test can be computed only if
1. The true value of for which the power is to be determined is specified. Different
values of give different powers. Let this value be t . Of course we do not know
the true value, but we may be interested in the power of the test, usually for some
minimum differences among elements of . Logically t must be true if the null and
the alternative hypotheses are true. Accordingly a t must be chosen that violates
0
neither H00 = c0 nor Ha = ca .
2. The probability of the type 1 error must be specified. This is often called the chosen
significance level of the test.
3. The value of
e2 must be specified. Because the power should normally be computed
prior to the experiment, this would come from prior research. Define this value as
d.
3
Suppose we wish to test

H0'β = 0,

where

H0' = [ 0 1 0 −1 ]
      [ 0 0 1 −1 ] .

Suppose we want the power of the test for βt = [10, 2, 1, −3] and σe² = 12. That
is, d = 12. Then

(Xβt)' = [12, 12, 12, 11, 11, 7, 7, 7, 7].

As we have shown, the reduction under the null hypothesis in this case can be found from
the reduced model E(y) = 1μ. The OLS equations for the full model, with y replaced by
its expectation, are

[ 9 3 2 4 ]          [ 86 ]
[ 3 3 0 0 ]  β°  =   [ 36 ] .
[ 2 0 2 0 ]          [ 22 ]
[ 4 0 0 4 ]          [ 28 ]

A solution is (0, 12, 11, 7), and the reduction is 870. The restricted equations are

9 μ° = 86,

and the reduction is 821.78. Then s = 48.22. Let us choose α = .05 as the
significance level.

f1 = 2 − 0 = 2.
f2 = 9 − 3 = 6.
P = [48.22/3(12)]^(1/2) = 1.157.

Entering Tiku's table we obtain the power of the test.
Chapter 7
Known Functions of Fixed Effects
C. R. Henderson
1984 - Guelph
Tests of Estimability
C = 0,
(1)
X=
1
2
1
3
1
2
2
1
3
2
1
1
3
3
4
5
2
3
1
5
0
7
2
5
1
3
1 1
= 0
1
0
0 1
= (1 0) 6= 0.
1
0
0 1
(1 2 2 1)
Now
X
T0
C=
1
2
1
3
1
2
1
2
1
3
2
1
1
2
3
3
4
5
2
3
2
1
5
0
7
2
5
1
3
1
0
1
= 0.
T =
1 2 2 1
2 1 1 3
,
X
T0
= 4. This is because p r =
X0 V1 y
c
(3)
X0 R1 X
X0 R1 Z
T
o
X0 R1 y
0 1
0 1
0 1
1
= Z R y .
0 u
(4)
ZR X ZR Z+G
0
T
0
0
c
0
If T represents p r linearly independent non-estimable functions, o has a unique
solution. A second method where c = 0 is the following. Partition T0 , with re-ordering
of columns if necessary, as
0
0
T0 = [T1 T2 ],
the re-ordering done, if necessary, so that T2 is non-singular. This of course implies that
0
T2 is square. Partition X = [X1 X2 ], where X2 has the same number of columns as T2
and with the same re-ordering of columns as in T0 . Let
0
W = X1 X2 (T02 )1 T1 .
Then solve for o , in either of the following two forms.
W0 V1 W o1 = W0 V1 y.
(5)
W0 R1 W W0 R1 Z
o1
W0 R1 y
=
Z0 R1 W Z0 R1 Z + G1
u
Z0 R1 y
In terms of the model with no definitions on the parameters,
!
E( o1 ) = (W0 V1 W) W0 V1 X.
0
0
o2 = (T2 )1 T1 o1 .
0
E( o2 ) = (T2 )1 T01 E( o1 ).
(6)
(7)
(8)
(9)
Let us illustrate with the same X used for illustrating estimability when T0 is
defined. Suppose we define
0
T =
1 2 2 1
2 1 1 3
, c = 0.
then T0 are non-estimable functions. Consequently the following GLS equations have a
unique solution. It is assumed that V ar(y) = Ie2 . The equations are
2
e
20
16
36
44
1
2
16
20
36
28
2
1
36 44 1 2
36 28 2 1
72 72 2 1
72 104 1 3
2
1 0 0
1
3 0 0
3
46
52
98
86
0
0
2
.
e
(10)
X1 =
0
X2 =
0
T1 =
1 2 1 3 1 2
2 1 3 2 1 1
3 3 4 5 2 3
1 5 0 7 2 5
1 2
2 1
2 1
1 3
, T2 =
then
0
W =
.2 1.6
.2 2.2 .6 1.6
1.0 2.0 1.0 3.0 1.0 2.0
10.4 13.6
13.6 20.0
o1
25.2
46.0
e2 .
.2 1.0
.6
0
380
424
348
228
/72 =
/72.
It is easy to verify that these are estimable under the restricted model.
At this point it should be noted that the computations under T'β = c, where these
represent p − r non-estimable functions, are identical with those previously described where
the GLS or mixed model equation solution is restricted to M'β° = c. However, all linear
functions of β are estimable under the restriction regarding parameters, whereas they are
not when these restrictions are on the solution, β°. Restrictions M'β° = c are used only
for convenience, whereas T'β = c are used because that is part of the model.
Now let us illustrate with our same example, but with only one restriction, that being
(2 1 1 3) = 0.
4
2
e
20
16
36
44
2
16
20
36
28
1
36 44 2
36 28 1
72 72 1
72 104 3
1
3 0
46
52
98
86
0
e2 .
These do not have a unique solution, but one solution is (-88, 0, 272, -32, 0)/144. By the
method of (5)
0
T1 = (2 1 1),
0
T2 = 3.
This leads to
102
68 52 32
o
1 2
9 e 52 116 128 1 = 210 91 e2 .
624
32 128 320
These do not have a unique solution but one solution is (-88 0 272)/144 as in the other
method for o1 .
Sampling Variances
(11)
where C11 is the upper p2 submatrix of a g-inverse of the coefficient matrix. The same is
true for (4).
If the method of (5) is used
0
V ar(K1 o1 ) = K1 (W0 V1 W) K1 .
0
0
0
Cov(K1 o1 , o2 K2 ) = K1 (W0 V1 W) T1 T1
2 K2 .
0
0
0 1
0
o
0 1
V ar(K2 2 ) = K2 (T2 ) T1 (W V W) T1 T1
2 K2 .
(12)
(13)
(14)
If the method of (6) is used, the upper part of a g-inverse of the coefficient matrix is used
in place of (11).
Let us illustrate with the same example and with one restriction. A g-inverse of the
coefficient matrix is
0
0
0
0
0
0
80 32 16
576
2
e
29
1 432
0 32
.
576
1
5
144
0 16
Then
0
80 32 16
0 o
0
29
1
V ar(K ) =
K 0 32
K.
576
0 16
1
5
e2
0
0
0
e2
80 32
,
0
576
0 32
29
which is the same as the upper 3 3 of the matrix above. From (13)
(W0 V1 W) T1 T1
2
0
0
0
0
2
0
1
1
80 32
=
0
1 = 16
576
3
576
0 32
29
1
1
Hypothesis Testing
0
H0
T0
c0
c
for H0 c0 .
(15)
Ha
T0
ca
c
for Ha ca .
6
3
2
1
3
7
1
2
2
1
8
1
1
2
1
9
o =
9
12
15
16
(16)
1 2 1 0
2 1 0 1
6
3
2 1
1 3
3
7
1 2 1 1
2
1
8 1 1 2
1
2
1 9
1 3
1 1 1 1
0 0
3
1
2 3
0 0
a
a
9
12
15
16
0
0
The solution is [-1876, 795, -636, 2035, -20035, 20310]/3643, and the reduction under Ha
is 15676/3643 = 4.3030. Then solve
6
3
2
1
1
2
3
3
7
1
2
2
1
1
2
1
8
1
1
0
2
1
2
1
9
0
1
3
1
2
1
0
0
0
0
2
1
0
1
0
0
0
3
1
2
3
0
0
0
0
0
9
12
15
16
0
0
0
The solution is [-348, 290, -232, 406, 4380, -5302, 5088]/836, and the reduction is 3364/836
= 4.0239. Then we test 4.3030 - 4.0239 = .2791 entering 2 with 1 degree of freedom
0
0
coming from the differences between the number of rows in H0 and Ha .
0
By the method involving V ar(Ho ) and V ar(Ha ) we solve the following equations
and find a g-inverse of the coefficient matrix.
6
3
2
1
3
3
7
1
2
1
2
1
8
1
2
1
2
1
9
3
3
1
2
3
0
o
0
9
12
15
16
0
887
53 55
659 121
407
1012
352
220
616
2024
/4972.
Now
H0 o = [2.82522 1.20434]0 ,
H0 C11 H0 =
.65708 .09735
.27031
and
(H0 C11 H0 )
1.60766 .57895
3.90789
= B,
where C11 is the upper 4 4 submatrix of the inverse of the coefficient matrix. Then
[2.82522 1.20434] B [2.8522 1.20434]0 = 22.44007.
0
Chapter 8
Unbiased Methods for G and R Unknown
C. R. Henderson
1984 - Guelph
Unbiased Estimators
Many unbiased estimators of K0 can be computed. Some of these are much easier
and R
used. Also some of them are invariant to G
(1)
(3)
(4)
(5)
1 X) K.
1 (ZGZ0 + R)R
1 X(X0 R
1 X) X0 R
V ar(K0 o ) = K0 (X0 R
(6)
These methods all would seem to imply that the diagonals of G1 are large relative
to diagonals of R1 .
Other methods would seem to imply just the opposite, that is, the diagonals of
G are small relative to R1 . One of these is OLS regarding u as fixed for purposes of
computation. That is solve
1
X0 X X 0 Z
Z0 X Z0 Z
o
uo
X0 y
Z0 y
(7)
K
M
= (K M0 )CW0 RWC
K
M
(8)
+ M0 GM,
(9)
K
M
(K M )C
e2 .
(10)
o
uo
2
1 y
X0 R
1 y
Z0 R
(11)
K
M
= (K M )CW R RR WC
K
M
+M0 GM.
(12)
o
uo1
X0 y
0
Z1 y
0
(13)
K
M
(14)
W = (X Z1 ), and ZGZ0 refers to the entire Zu vector, and C is some g-inverse of the
matrix of (13).
Let us illustrate some of these methods with a simple example.
X0 = [1 1 1 1 1],
1 1 0 0 0
Z0 = 0 0 1 1 0 , R = 15I, G = 2I,
0 0 0 0 1
3
y0 = [6 8 7 5 7].
17
2
17
V ar(y) = ZGZ0 + R =
0
0
17
0
0
2
17
0
0
0
0
17
V ar( ) = .2 (1 1 1 1 1) Var
(y)
1
1
1
1
1
.2 = 3.72.
2 0 0
2 0
1
o
uo
33
14
12
7
0
.5
0 0
0 0
.
.5 0
1
(k0 m0 )CW0 =
1
1
0
0
1
1
0
0
1
0
1
0
1
0
1
0
(3 1 1 1)
1
0
0
1
0
.5
0 0
0 0
.5 0
1
1
(1 1 1 1 2).
6
Then V ar( o ) = 4 6=
3.333 + .667 = 4 also.
BLUE would be obtained by using the mixed model equations with R = 15I,
G = 2I if these are the true values of R and G. The resulting equations are
1
15
5
2
2
1
2
9.5
0
0
2
0
9.5
0
1
0
0
8.5
o
u
33
14
12
7
/15.
o = 6.609.
The upper 1 1 of a g-inverse is 3.713, which is less than for any other methods, but
of course depends upon true values of G and R.
Unbiased Predictors
The method for prediction of u used by most animal breeders prior to the recent
general acceptance of the mixed model equations was selection index (BLP) with some
Then
estimate of X regarded as a parameter value. Denote the estimate of X by X.
the predictor of u is
0V
1 (y X).
= GZ
u
(15)
and V
are estimated G and V.
G
This method utilizes the entire data vector and the entire variance-covariance structure to predict. More commonly a subset of y was chosen for each individual element of
u to be predicted, and (15) involved this reduced set of matrices and vectors.
is an unbiased estimator of X, E(
Now if X
u) = 0 = E(u) and is unbiased. Even
if G and R were known, (15) would not represent a predictor with minimum sampling
should be a GLS solution. Further,
variance. We have already found that for this
in selection models (discussed in chapter 13), usual estimators for such as OLS or
is no longer an unbiased predictor.
estimators ignoring u are biased, so u
Another unbiased predictor, if computed correctly, is regressed least squares first
reported by Henderson (1948). Solve for uo in equations (16).
X0 X X 0 Z
Z0 X Z 0 Z
o
uo
5
X0 y
Z0 y
(16)
Take a solution for which E(uo ) = 0 in a fixed but random u model. This can be
done by absorbing o to obtain a set of equations
Z0 PZ uo = Z0 Py,
(17)
where
P = [I X(X0 X) X0 ].
Then any solution to uo , usually not an unique solution, has expectation 0, because
E[I X(X0 X) X0 ]y = (X X(X0 X) X0 X) = (X X) = 0. Thus uo is an unbiased
predictor, but not a good one for selection, particularly if the amount of information
differs greatly among individuals.
Let some g-inverse of Z0 PZ be defined as C. Then
V ar(uo ) = CZ0 P(ZGZ0 + R)PZC,
(18)
(19)
Let the ith diagonal of (18) be vi , and the ith diagonal of (19) be ci , both evaluated by
some estimate of G and R. Then the regressed least square prediction of ui is
ci uoi /vi .
(20)
This is BLP of ui when the only observation available for prediction is uoi . Of course other
data are available, and we could use the entire uo vector for prediction of each ui . That
would give a better predictor because (18) and (19) are not diagonal matrices.
In fact, BLUP of u can be derived from uo . Denote (18) by S and (19) by T. Then
BLUP of u is
TS uo ,
(21)
provided G and R are known. Otherwise it would be approximate BLUP.
This is a cumbersome method as compared to using the mixed model equations,
but it illustrates the reason why regressed least squares is not optimum. See Henderson
(1978b) for further discussion of this method.
In the methods presented above it appears that some assumption is made concerning
the relative values of G and R. Consequently it seems logical to use a method that
and R
approach G and R. This would be to substitute
approaches optimality as G
and R
for the corresponding parameters in the mixed model equations. This is a
G
procedure which requires no choice among a variety of unbiased methods. Further, it has
6
and R
are fixed, the estimated sampling variance and
the desirable property that if G
prediction error variances are simple to express. Specifically the variances and covariances
and R = R
are precisely the results in (34) to (41) in Chapter 5.
estimated for G = G
It also is true that the estimators and predictors are unbiased. This is easy to prove
and R
but for estimated (random) G
and R
we need to invoke a result by
for fixed G
and R
note that after
Kackar and Harville (1981) presented in Section 4. For fixed G
absorbing u from the mixed model equations we have
1 y.
1 X o = X0 V
X0 V
Then
1 y)
1 X) X0 V
E(K0 o ) = E(K0 (X0 V
1 X) X0 V
1 X
= K0 (X0 V
= K0 .
Also
1 Z + G
1 )1 Z0 R
1 (y X o ).
= (Z0 R
u
But X o is an unbiased estimator of X, y X o with expectation 0 and consequently
E(
u) = 0 and is unbiased.
We have seen that unbiased estimators and predictors can be obtained even though
G and R are unknown. When it comes to testing hypotheses regarding little is known
except that exact tests do not exist apart from a special case that is described below.
The problem is that quadratics in H0 o c appropriate for exact tests when G and
R
replace
R are known, do not have a 2 or any other tractable distribution when G,
G, R in the computation. What should be done? One possibility is to estimate, if
possible G, R, by ML and then invoke a likelihood ratio test, in which under normality
assumptions and large samples, -2 log likelihood ratio is approximated by 2 . This raises
the question of what is a large sample of unbalanced data. Certainly n is not
a sufficient condition. Consideration needs to be given to the number of levels of each
subvector of u and to the proportion of missing subclasses. Consequently the value of a
2 approximation to the likelihood ratio test is uncertain.
= G and R
= R and
A second and easier approximation is to pretend that G
2
proceed to an approximate test using as described in Chapter 4 for hypothesis testing
with known G, R and normality assumptions. The validity of this test must surely
depend, as it does in the likelihood ratio approximation, upon the number of levels of u
and the balance and lack of missing subclasses.
One interesting case exists in which exact tests of can be made even when we do
not know G and R to proportionality. The requirements are as follows
1. V ar(e) = Ie2 , and
0
V ar(H0 o ) = H0 C11 H0 e2
(22)
where C11 is the upper p p submatrix of a g-inverse of the coefficient matrix. Then
under the null hypothesis versus the unrestricted hypothesis
0
(H0 o )
2e
[H0 C11 H0 ]1 H0 o /s
(23)
Chapter 9
Biased Estimation and Prediction
C. R. Henderson
1984 - Guelph
All methods for estimation and prediction in previous chapters have been unbiased. In this chapter we relax the requirement of unbiasedness and attempt to minimize
the mean squared error of estimation and prediction. Mean squared error refers to the sum
of prediction error variance plus squared bias. In general, biased predictors and estimators exist that have smaller mean squared errors than BLUE and BLUP. Unfortunately,
we never know what are truly minimum mean squared error estimators and predictors
because we do not know some of the parameters required for deriving them. But even
for BLUE and BLUP we must know G and R at least to proportionality. Additionally
for minimum mean squared error we need to know squares and products of at least
proportionally to G and R.
(1)
V + X 2 2 2 X 2 X1
0
X1
0
ZGm + X2 2 2 k2
k1
(2)
a has a unique solution if and only if k1 1 is estimable under a model in which E(y)
contains X1 1 . The analogy to GLS of 1 is a solution to (3).
0
X1 (V + X2 2 2 X2 )1 X1 1 = X1 (V + X2 2 2 X2 )1 y.
1
(3)
2 2 X2 (V + X2 2 2 X2 )1 (y X1 1 )
(4)
K1 1 + K2 2 + M0 u is K1 1 + K2 2 + M0 u .
(6)
We know that BLUE and BLUP can be computed from mixed model equations.
Similarly 1 , 2 , and u can be obtained from modified mixed model equations (7), (8),
0
or (9). Let 2 2 = P. Then with P singular we can solve (7).
0
X1 R1 X1
X1 R1 X2
X1 R1 Z
0
0
0
X1 R1 y
1
0
2 = PX2 R1 y
u
Z0 R1 y
(7)
The rank of this coefficient matrix is rank (X1 )+p2 +q, where p2 = the number of elements
in 2 . The solution to 2 and u is unique but 1 is not unless X1 has full column rank.
Note that the coefficient matrix is non-symmetric. If we prefer a symmetric matrix, we
can use equations (8).
0
X1 R1 X1
X1 R1 X2 P
X1 R1 Z
0
0
0
X1 R1 y
1
0
2 = PX2 R1 y
u
Z0 R1 y
(8)
0
Then 2 = P2 . The rank of this coefficient matrix is rank (X1 ) + rank (P) + q. K1 1 ,
2 , and u are identical to the solution from (7). If P were non-singular we could use
equations (9).
0
X1 R1 X1 X1 R1 X2
X1 R1 Z
0
0
0
X2 R1 X1 X2 R1 X2 + P1 X2 R1 Z
0 1
0 1
0 1
1
Z R X 1 Z R X2
ZR Z+G
1
X1 R1 y
0
2 = X2 R1 y
u
Z0 R1 y
(9)
and P.
These would be used in place of the parameter values in (2) through
say R, G,
(9).
In all of these except (9) the solution to 2 has a peculiar and seemingly undesirable
, where k is some constant. That is, the elements of are
property, namely 2 = k
2
2
2 . Also it should be noted that if, as should always be
proportional to the elements of
the case, P is positive definite or positive semi-definite, the elements of 2 are shrunken
(are nearer to 0) compared to the elements of the GLS solution to 2 when X2 is full
column rank. This is comparable to the fact that BLUP of elements of u are smaller in
absolute value than are the corresponding GLS computed as though u were fixed. This
last property of course creates bias due to 2 but may reduce mean squared errors.
(10)
K1 (X1 V1 X1 + V11 )1 K1 .
(11)
X1 R1 X1 + V11 X1 R1 Z
Z0 R1 X1
Z0 R1 Z + G1
X1 R1 y + V11 1
Z0 R1 y
(12)
The previous methods of this chapter requiring prior values of every element of
and resulting estimates with the same proportionality as the prior is rather distasteful.
A possible alternative solution is to assume a pattern of values of with less than p
3
parameters. For example, with two way, fixed, cross-classified factors with interaction we
might assume in some situations that there is no logical pattern of values for interactions.
Defining for convenience that the interactions sum to 0 across each row and each column,
and then considering all possible permutations of the labelling of rows and columns, the
following is true for the average squares and products of these interactions. Define the
interaction for the ij th cell as ij and define the number of rows as r and the number of
columns as c. The average values are as follows.
2
ij
ij ij 0
ij i0 j
ij i0 j 0
=
=
=
=
,
/(c 1),
/(r 1),
/(c 1)(r 1).
(13)
(14)
(15)
(16)
Then if we have some prior value of we can proceed to obtain locally minimum
mean squared error estimators and predictors as follows. Let P = estimated average
0
value of 2 2 . Then solve equations (7), (8) or (9).
Evaluation Of Bias
If we are to consider biased estimation and prediction, we should know how to evaluate
the bias. We do this by looking at expectations. A method applied to (7) is as follows.
0
Remember that K1 1 is required to have expectation, K1 1 + some linear function of 2 .
0
For this to be true K1 1 must be estimable under a model with X2 2 not existing. 2
and u are required to have expectation that is some linear function of 2 .
Let some g-inverse of the matrix of (7) be
E(K1 1 ) = K1 1 + K1 C1 T 2 ,
where
(17)
(18)
1 X2
X1 R
0
1 X2
T = PX2 R
.
0 1
Z R X2
E( 2 ) = C2 T 2 .
E(u ) = C3 T 2 .
4
(19)
(20)
For K1 1 , bias = K1 C1 T 2 .
For 2 , bias = (C2 T I) 2 .
For u , bias = C3 T 2 .
(21)
(22)
(23)
If the equations (8) are used, the biases are the same as in (21), (22), and (23) except
that (22) is premultiplied by P, and C refers to a g-inverse of the matrix of (8). If the
0
1 X2 , and C refers to the inverse
equations of (9) are used, the second term of T is X2 R
of the matrix of (9).
If we are to use biased estimation and prediction, we should know how to estimate
mean squared errors of estimation and prediction. For the method of (7) proceed as
follows. Let
0
1 X2
X1 R
0
1 X2
(24)
T = PX2 R
.
1 X2
Z0 R
Note the similarity to the second column of the matrix of (7). Let
0
1 Z
X1 R
0
1 Z
S = PX2 R
.
0 1
ZR Z
(25)
Note the similarity to the third column of the matrix of (7). Let
0
1
X1 R
0
1
H = PX2 R
.
0 1
ZR
(26)
Note the similarity to the right hand side of (7). Then compute
C1 T
0
0 0
C2 T I 2 2 (T C1
C3 T
T0 C2 I T0 C3 )
C1 S
0 0
+ C2 S
G (S C1
C3 S I
S0 C2
S0 C3 I)
C1 H
0 0
+
C2 H R (H C1
C3 H
5
H0 C2
H0 C3 )
(27)
M3 )
u u
(M1 M2
= (M1 M2
M1
0
M3 ) B M2 .
M3
(28)
(29)
(30)
2 T I for C2 T I, PC
2 S for C2 S, and PC
2 H for C2 H and proceed as
Substitute PC
I 0 0
I 0 0
0
P
0
C 0 P 0 .
0 0 I
0 0 I
(31)
If the method of (9) is used, delete P from T, S, and H in (24), (25), and (26), let C
be a g-inverse of the matrix of (9), and then proceed as for method (7). When P = P,
thus this linear function is an unbiased estimator. But if we relax the requirement of
unbiasedness, is the above an appropriate definition of estimability? Is any function of
now estimable? It seems reasonable to me to restrict estimation to functions that could
be estimated if we had no missing subclasses. Otherwise we could estimate elements of
that have no relevance to the experiment in question. For example, treatments involve
levels of protein in the ration. Just because we invoke biased estimation of treatments
would hardly seem to warrant estimation of some treatment that has nothing to do with
level of protein. Consequently we state these rules for functions that can be estimated
biasedly.
0
1 1 0 0
1 0 1 0
1 0 0 1
0
t1
t2
t3
0
Further with 1 being just , and K1 being 1, and X1 = (1 1 1), K1 1 is estimable under
a model E(yij ) = .
0
1 1 0
0
0
K1 1 = 1 0 1 t1 .
1 0 0
t2
But the third row represents a non-estimable function. That is, is not estimable under
0
the model with 1 = ( t1 t2 ). Consequently + t3 should not be estimated in this way.
As another example suppose we have a 2 3 fixed model with n23 = 0 and all other
nij > 0. We want to estimate all six ij = + ai + bj + ij . With no missing subclasses
these are estimable, so they are candidates for estimation. Suppose we use priors on .
Then
(K1 K2 )
1
2
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
1
1
1
1
0
0
1
0
0
0
1
0
0
1
0
0
0
1
0
0
1
1
0
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
1
a1
a2
b1
b2
b3
y11
y12
y13
y21
y22
y23
+ a1 + b1 + 11
+ a1 + b2 + 12
+ a1 + b3 + 13
+ a2 + b1 + 21
+ a2 + b2 + 22
+ a2 + b 3
Tests Of Hypotheses
Exact tests of hypotheses do not exist when biased estimation is used, but one might
wish to use the following approximate tests that are based on using mean squared error
of K0 o rather than V ar(K0 o ).
7.1
V ar(e) = Ie2
When V ar(e) = Ie2 write (7) as (32) or (8) as (33). Using the notation of Chapter
6, G = G e2 and P = P e2 .
X1 X 1
X1 X 2
0
X0 X2 + I
P X2 X1 P
2
Z0 X1
Z0 X2
X1 Z
1
0
X Z
2 =
P
2
u
Z0 Z + G1
X1 y
X0 y
.
P
2
0
Zy
(32)
0
0
0
X1 Z
X 1 X2 P
X 1 X1
1
0
X 0 X2 P
+ P
P
X0 Z
=
P X 2 X1 P
2
2
u
Z0 X1
Z 0 X2 P
Z0 Z + G1
X1 y
X0 y
P
2
0
Zy
(33)
.
Then 2 = P
2
Let a g-inverse of the matrix of (32) post-multiplied by
I 0 0
0 P 0 Q
0 0 I
or a g-inverse of the matrix (33) pre-multipled and post-multiplied by Q be denoted by
C11 C12
C21 C22
7.2
V ar(e) = R
Let g-inverse of (7) post-multiplied by
I 0 0
0 P 0 Q
0 0 I
or a g-inverse of (8) pre-multiplied and post-multiplied by Q be denoted by
C11 C12
C21 C22
= R, G
= G, and P
= P, K0 C11 K is the mean squared error of K0 , and
Then if R
(K0 c)0 (K0 C11 K)1 (K0 c) is distributed approximately as 2 with s degrees of
freedom under the null hypothesis, K0 = c.
9
Estimation of P
If one is to use biased estimation and prediction, one would usually have to estimate
P, ordinarily a singular matrix. If the elements of 2 are thought to have no particular
pattern, permutation theory might be used to derive average values of squares and products of elements of 2 , that is the value of P. We might then formulate this as estimation
of a variance covariance matrix, usually with fewer parameters than t(t + 1)/2, where t is
the order of P. I think I would estimate these parameters by the MIVQUE method for
singular G described in Section 9 of Chapter 11 or by REML of Chapter 12.
Illustration
We illustrate biased estimation by a 3-way mixed model. The model is
yhijk = rh + ci + hi + uj + eijk ,
r, c, are fixed, V ar(u) = I/10, V ar(e) = 2I.
yhi..
18
13
7
26
9
We want to estimate using prior values of the squares and products of hi . Suppose
this is as follows, ordering i within h, and including 23 .
.1 .05 .05 .1 .05
.05
.1
.05
.05
.1
.1 .05 .05
.1 .05
.1
X1 R1 X1 X1 R1 X2 X1 R1 Z
1
X1 R1 y
0
0
0
0
X2 R1 X1 X2 R1 X2 X2 R1 Z 2 = X2 R1 y
u
Z0 R1 X1 Z0 R1 X2 Z0 R1 Z
Z0 R1 y
10
5 4 1
7 0
1
2
1
0
0
0
1
3
0
3
0
0
3
2
0
0
2
0
0
2
1
0
0
0
1
0
0
1
0
4
4
0
0
0
0
0
4
0
1
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
3
1
3
0
1
2
0
1
1
0
0
4
2
2
3
1
0
1
1
0
2
0
0
0
4
1
2
1
2
0
0
1
0
1
1
0
0
0
3
r
c
38
35
44
22
7
18
13
7
26
9
0
25
27
21
1
2
(34)
I 0 0
0 P 0 T
0 0 I
and adding I to the diagonals of equations (6)-(11) and 10I to the diagonals of equations
(12)-(14) we obtain the coefficient matrix to solve for the biased estimators and predictors.
The right hand side vector is
(19, 17.5, 22, 11, 3.5, .675, .225, .45, .675, .225, .45, 12.5, 13.5, 10.5)0 .
This gives a solution of
r
c
=
=
=
=
(3.6899, 4.8607),
(1.9328, 3.3010, 3.3168),
(.11406, .11406, 0, .11406, .11406, 0),
(.00664, .04282, .03618).
Note that
X
Xi
ij = 0 for i = 1, 2, and
j ij
= 0 for j = 1, 2, 3.
11
.26181 .10042
.02331 0
.15599 .02368
.00368
.05313
.58747 .22911 0
.54493
.07756
.00244
.05783 .26296
.41930 0 .35232 .02259 .00741
.56640
.61368 .64753 0 1.02228
.00080 .03080
.29989
.13633
.02243 0
2.07553
.07567
.04433
.02288
.07836 .02339 0
.07488
.08341 .03341
.02712 .02836
.02339 0
.07512 .03341
.08341
(35)
Upper right 7 7
.02
.08
.03
.03
.12
.05
.05
.02368
.07756
.02259
.00080
.07567
.08341
.03341
.00244
.08 .01180 .01276 .03544
.03080 .03
.02588 .01608 .04980
.04433
.12 .05563
.01213
.00350
.03341
.05 .00199
.00317 .00118
.08341
.05
.00199 .00317
.00118
(36)
Lower left 7 7
.05
.05
0 0
.15
.05
.05
.02288 .07836
.02339 0 .07488 .08341
.03341
.02712
.02836 .02339 0 .07512
.03341 .08341
.05
.05
0 0
.15
.05
.05
.01192
.01408 .04574 0 .08l51 .00199
.00199
.05450 .08524
.05597 0
.05330 .00118
.00118
(37)
Lower right 7 7
.10
.05
.05 .10
0
0
0
.10
0
0
0
.09343
.00537
.00120
.09008
.00455
.09425
12
(38)
0
0
0
0 0 0
.51469
.32450
.51712 0 0
1.20115
.72380 0 0
3.34426
0
0
0 0
(39)
.65838
.68324 0 0
.026 .00474
.03075
0
0 0 0
0
0
0
.14427 .71147 0 0
.01408 .02884 .08524
0
0 0 0
0
0
0
0
0 0 0
0
0
0
(40)
Lower right 7 7
10.38413 0 0
.02657 .04224
.01567
0 0
0
0
0
0
0
0
0
.09343
.00537
.00120
.09008
.00455
.09425
13
(41)
Upper left 7 7
0
0
0
0
0
0
.51469
.32450
.51712
.05497 .00497
1.20115
.72380
.07836 .02836
3.34426
.15324
.04676
.08341 .03341
.08341
(42)
.1
0
.05
.05
.2
.05
.05
.10124 .00124 .1
.026 .00474
.03075
0
0
0
0
0
0
.05497
.00497 .05 .03166 .03907 .02927
.07836
.02836 .05
.01408 .02884 .08524
.15324 .04676
.2 .06743 .00063 .03194
.08341
.03341 .05 .00199
.00317 .00118
(43)
K0 =
6
0
3
3
3
0
6
3
3
3
2
2
6
0
0
2
2
0
6
0
2
2
0
0
6
2
0
3
0
0
2
0
0
3
0
2
0
0
0
3
0
2
3
0
0
0
2
0
3
0
0
2
0
0
3
/6.
Pre-multiplying the upper 11x11 submatrix of either (35) to (38) or (42) to (43) by K0
gives identical results shown in (44).
.76397 .09215
2.51814
(44)
This represents the estimated mean squared error matrix of these 5 functions of .
Next we illustrate with another set of data the relationships of (3), (4), and (5) to
(7). We have a design with 3 treatments and 2 random sires. The subclass numbers are
14
Treatments
1
2
3
Sires
1 2
2 1
1 2
2 0
The model is
yijk = + ti + sj + xijk + eijk .
where is a regression and xijk the associated covariate.
y0 = (5 3 6 4 7 5 4 8),
Covariates = (1 2 1 3 2 4 2 3).
The data are ordered sires in treatments. We shall use a prior on treatments of
2 1 1
2 1
.
2
V ar(e) = 5I, and V ar(s) = I.
We first illustrate the equations of (8),
0
X1 R1 X1 X1 R1 X2 X1 R1 Z
0
0
0
X2 R1 X1 X2 R1 X2 X2 R1 Z =
Z0 R1 X1
Z0 R1 X2 Z0 R1 Z
.6
0
0 .4 .2
.6
0 .2 .4
.4 .4
0
1.0
0
.6
(45)
and
0
X1 R1 y
0
0
X2 R1 y = (8.4 19.0 2.8 3.2 2.4 4.8 3.6) .
Z0 R1 y
1 0 0
0
0 0
1 0
0
0 0
2
1
1
0
2 1 0
2 0
15
0
0
0
0
0
0
1
(46)
we get
1.6
3.6
.6
.6
.4 1.0
.6
3.6
9.6
.8 1.8 1.0 2.2 1.4
.2 1.2 1.2 .6 .4
.2
0
.2
1.8 .6 1.2 .4 .4
.6
,
.4 .6 .6 .6
.8
.2 .6
1.0
2.2
.4
.2
.4 1.0
0
.6
1.4
.2
.4
0
0
.6
(47)
(48)
and
The vector (48) is the right hand side of equations like (8). Then the coefficient matrix
is matrix (47) + dg(0 0 1 1 1 1 1). The solution is
(t )0
(s )0
=
=
=
=
5.75832,
.16357,
(.49697 .02234 .51931),
(.30146 .30146).
6 1 0 1
6 0 1
6 0
V = (ZGZ0 + R) =
0
0
1
0
6
0
0
1
0
1
6
1
1
0
1
0
0
6
1
1
0
1
0
0
1
6
(49)
2 2 2 1 1 1 1 1
2 2 1 1 1 1 1
2 1 1 1 1 1
0
0
2
2
2 1 1
X2 2 2 X2 =
.
2
2
1
1
2 1 1
2
2
2
0
(V + X2 2 2 X2 )1 =
16
(50)
.1444
.0214 .0067 .0067
.0119
.0119
.1542
.0458
.0093
.0093
.1542
.0093
.0093
.1482 .0518
.1482
(51)
4.622698
10.296260
(52)
(y
X1 1 )
1
1
1
1
1
1
1
1
1
2
1
3
2
4
2
3
5.75832
.163572
.59474
2.43117
.40526
1.26760
1.56883
.10403
1.43117
2.73240
2 2 X2 (V + X2 2 2 X2 )1 =
.1426
.1426
.1471 .0725 .0681 .0681 .0899 .0899
.1742
.1270
.1270 .0766 .0766
.0501 .0501 .0973
.
.0925 .0925 .0497 .1017 .0589 .0589
.1665
.1665
Then t = (-.49697 -.02234 .51931)0 as before.
0
GZ0 (V + X2 2 2 X2 )1 =
.0949
.0949 .0097
.1174 .0127 .0127 .0923 .0923
.0053 .0053
.1309 .0345 .1017 .1017 .0304 .0304
10
BLUP, Bayesian estimation, and minimum mean squared error estimation are quite
similar, and in fact are identical under certain assumptions.
17
10.1
Bayesian estimation
0 0
0 G1
and prior on = 0. Then (53) becomes the mixed model equations for BLUE and BLUP.
10.2
Using the same notation as in Section 10.1, the minimum mean squared error estimator is
(W0 R1 W + Q1 ) o = W0 R1 y,
(54)
where Q = C + 0 . Note that if = 0 this and the Bayesian estimator are identical.
The essential difference is that the Bayesian estimator uses prior E(), whereas minimum
MSE uses only squares and products of .
To convert (54) to the situation with prior on 2 but not on 1 , let
0 0
0
1
0
= 0 P
.
0 0
G1
The upper left partition is square with order equal to the number of elements in 1 .
To convert (54) to the BLUP, mixed model equations let
Q =
0 0
0 G1
18
where the upper left submatix is square with order p, the number of elements in . In
the above results P may be singular. In that case use the technique described in previous
sections for singular G and P.
10.3
Under normality and with absolute deviation as the loss function, the Bayesian esti ), where ( o , u
) is the Bayesian solution (also the BLUP
mator of f(, u) is f ( o , u
solution when the priors are on u only), and f is any function. This was noted by Gianola
(1982) who made use of a result reported by DeGroot (1981). Thus under normality any
function of the BLUP solution is the Bayesian estimator of that function when the loss
function is absolute deviation.
10.4
11
Pattern Of Values Of P
When P has the structure described above and consequently is singular, a simpler
method can be used. A diagonal, non-singular P can be written, which when used in
mixed model equations results in the same estimates and predictions of estimable and
predictable functions. See Chapter 15.
19
Chapter 10
Quadratic Estimation of Variances
C. R. Henderson
1984 - Guelph
Xb
i=1
Zi ui .
(1)
q
i=1 i
= q,
(2)
(3)
e0 = (e1 e2 . . . ec ).
V ar(ei ) = Rii rii .
0
Cov(ei , ej ) = Rij rij .
1
(4)
(5)
rii and rij represent variances and covariances respectively. With this model V ar(y) is
V =
Xb
i=1
V ar(u) = G =
Xb
G11 g11
0
G12 g12
..
.
Zi Gij Zj gij + R,
j=1
G12 g12
G22 g22
..
.
(6)
G1b g1b
G2b g2b
..
.
(7)
V ar(e) = R =
R11 r11
0
R12 r12
..
.
R12 r12
R22 r22
..
.
R1c r1c
R2c r2c
..
.
(8)
Treatment 1
1
2
2
1
Sires
2 3
1 2
3 0
X =
1
1
1
1
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
0
1
1
1
1
t1 ,
t2
Z1 u1 =
1
1
0
0
0
1
0
0
0
0
0
1
0
0
0
1
1
1
0
0
0
1
1
0
0
0
0
s1
s2
,
s3
Z2 u2 =
1
1
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
1
1
1
ts11
ts12
ts13
ts21
ts22
and
2
.
G11 g11 = I3 s2 , G22 g22 = I5 ts
n = 8, q1 = 2, q2 = 2.
0
0
0
1
1
0
0
0
1 1/2
1/2 1
Z1 u1 =
G11 g11 =
1
1
1
0
0
0
0
0
u1 , Z2 u2 =
g11
,
G12 g12 =
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
1
u2 ,
1 1/2
1/2 1
g12
,
1 1/2
1/2 1
G22 g22 =
where
g11
g12
g12 g22
g22
,
where
r11
r12
r12
r22
1
0
0
0
0
0
1
0
0
0
0
0
0
1
0
r12
,
).
+ r11
/(g11
is the error variance-covariance matrix for the 2 traits. Then h21 = 4 g11
1/2
(9)
where
G11
G11 0
0
0
G12
0
G12 0
0
= G12 0
0 , ..., Gbb =
0
0
0
0 0
0 Gbb
R11 0
0
0
, R12
(10)
0
R12 0
0
=
0 , etc.
R12 0
0
0
0
Quadratic Estimators
Many methods commonly used for estimation of variances and covariances are quadratic,
unbiased, and translation invariant. They include among others, ANOVA estimators for
4
balanced designs, unweighted means and weighted squares of means estimators for filled
subclass designs, Hendersons methods 1, 2 and 3 for unequal numbers, MIVQUE, and
MINQUE. Searle (1968, 1971a) describes in detail some of these methods.
A quadratic estimator is defined as y0 Qy where for convenience Q can be specified
as a symmetric matrix. If we derive a quadratic with a non-symmetric matrix, say P, we
can convert this to a quadratic with a symmetric matrix by the following identity.
where
y0 Qy = (y0 Py + y0 P0 y)/2
Q = (P + P0 )/2.
(11)
b X
b
X
i=1 j=1
c
c X
X
tr(QRij )rij .
i=1 j=i
We require that the expectation equals ggh . Now if the estimator is translation invariant,
the first term in the expectation is 0 because QX = 0. Further requirements are that
tr(QZGij Z0 ) = 1 if i = g and j = h
= 0, otherwise and
Variances Of Estimators
Searle(1958) showed that the variance of a quadratic estimator y0 Qy, that is unbiased
and translation invariant is
2 tr(QVQV),
(12)
and the covariance between two estimators y0 Q1 y and y0 Q2 y is
2 tr(Q1 VQ2 V)
5
(13)
where y is multivariate normal, and V is defined in (6). Then it is seen that (12) and
(13) are quadratics in the gij and rij , the unknown parameters that are estimated. Consequently the results are in terms of these parameters, or they can be evaluated numerically
for assumed values of g and r. In the latter case it is well to evaluate V numerically for
assumed g and r and then to proceed with the methods of (12) and (13).
Unbiased estimators of variances and covariances with only one exception have positive probabilities of solutions not in the parameter space. The one exception is estimation
of error variance from least squares or mixed model residuals. Otherwise estimates of
variances can be negative, and functions of estimates of covariances and variances can
result in estimated correlations outside the permitted range -1 to 1. In Chapter 12 the
condition required for an estimated variance-covariance matrix to be in the parameter
space is that there be no negative eigenvalues.
An inevitable price to pay for quadratic unbiasedness is non-zero probability that
the estimated variance-covariance matrix will not fall in the parameter space. All such
estimates are obtained by solving a set of linear equations obtained by equating a set
of quadratics to their expectations. We could, if we knew how, impose side conditions
on these equations that would force the solution into the parameter space. Having done
this the solution would no longer yield unbiased estimators. What should be done in
practice? It is sometimes suggested that we estimate unbiasedly, report all such results
and then ultimately we can combine these into a better set of estimates that do fall in the
and R
s2 > 0,
e2 > 0, and
s2 /
e2 < 1/3.
Another point that should be made is that even though
s2 and
e2 are unbiased,
s2 /
e2 is
2
2
2
2
2
2
a biased estimator of s /e , and 4
s /(
s +
e ) is a biased estimator of h .
Form Of Quadratics
Except for MIVQUE and MINQUE most quadratic estimators in models with all
gij = 0 for i 6= j and with R = Ie2 can be expressed as linear functions of y0 y and of
reductions in sums of squares that will now be defined.
Let OLS equations in , u be written as
W0 Wo = W0 y
(14)
where W = (X Z) and
o
uo
(15)
(16)
and correspondingly
1
2
W1 W1 1 = W1 y.
(17)
(1 )0 W1 y.
(18)
Expectations of Quadratics
Let us derive the expectations of these ANOVA type quadratics.
E(y0 y) = tr V ar(y) + 0 X0 X
=
b
X
i=1
(19)
(20)
b
X
n gii + n e2 + 0 X0 X.
(21)
i=1
It can be seen that (15) and (18) are both quadratics in W0 y. Consequently we use
V ar(W0 y) in deriving expectations. The random part of W0 y is
X
W0 Zi ui + W0 e.
(22)
The matrix of the quadratic in W0 y for the reduction under the full model is (W0 W) .
Therefore the expectation is
b
X
(23)
i=1
n gii + r(W)e2 + 0 X0 X.
(24)
i=1
(W1 W1 ) 0
0
0
i=1
n gii +
(26)
and e
Quadratics in u
1 y
X0 R
1 y
Z0 R
(27)
be u
0 Q
Let some quadratic in u
u. The expectation of this is
trQ V ar(
u).
(28)
To find V ar(
u), define a g-inverse of the coefficient matrix of (27) as
C00 C01
C10 C11 .
C0
C1
C.
(29)
(30)
and
1 y) =
V ar(W0 R
Xb
Xb
i=1
Xc
j=1
Xc
i=1
1 Wgij
1 Zi Gij Z R
W0 R
j
(31)
1 R R
1 Wrij .
W0 R
ij
(32)
j=1
be e
0 Q
Let some quadratic in e
e. The expectation of this is
trQ V ar(
e).
(33)
= y X o Z
0 ] and W = (X Z), giving
But e
u = y Wo , where (o )0 = [( o )0 u
1 ]y.
= [I WCW0 R
e
(34)
1 ) [V ar(y)] (I WCW0 R
1 )0 ,
V ar(
e) = (I WCW0 R
(35)
Therefore,
and
V ar(y) =
Xb
Xb
i=1
Xc
i=1
j=1
Xc
Zi Gij Zj gij
(36)
Rij rij .
(37)
j=1
When
G
R
V ar(
u)
V ar(
e)
=
=
=
=
G,
R,
G C11 , and
R WCW0 .
(38) and (39) are used for REML and ML methods to be described in Chapter 12.
(38)
(39)
Hendersons Method 1
We shall now present several methods that have been used extensively for estimation
of variances (and in some cases with modifications for covariances). These are modelled
after balanced ANOVA methods of estimation. The model for these methods is usually
y = X +
Xb
i=1
Zi ui + e,
(40)
where V ar(ui ) = Ii2 , Cov(ui , uj ) = 0 for all i 6= j, and V ar(e) = Ie2 . However, it is
relatively easy to modify these methods to deal with
V ar(ui ) = Gii i2 .
For example, Gii might be A, the numerator relationship matrix.
Method 1, Henderson(1953), requires for unbiased estimation that X0 = [1...1]. The
model is usually called a random model. The following reductions in sums of squares are
computed
0
(41)
(42)
y0 y.
(43)
and
The first b of these are simply uncorrected sums of squares for the various factors
and interactions. The next one is the correction factor, and the last is the uncorrected
sum of squares of the individual observations.
Then these b + 2 quadratics are equated to their expectations. The quadratics of (41)
0
are easy to compute and their expectations are simple because Zi Zi is always diagonal.
Advantage should therefore be taken of this fact. Also one should utilize the fact that the
coefficient of i2 is n, as is the coefficient of any j2 for which Zj is linearly dependent upon
Zi . That is Zj = Zi K. For example the reduction due to sires herds has coefficient
2
n for sh
, s2 , h2 in a model with random sires and herds. The coefficient of e2 in the
0
expectation is the rank of Zi Zi , which is the number of elements in ui .
Because Method 1 is so easy, it is often tempting to use it on a model in which
X 6= (1...1), but to pretend that one or more fixed factors is random. This leads to
biased estimators, but the bias can be evaluated in terms of unknown 0 . In balanced
designs no bias results from using this method.
0
Number of Observations
Sires
Treatment 1 2 3 4 Sums
1
8 3 2 5
18
2
7 4 1 0
12
3
6 2 0 1
9
Sums
21 9 3 6
39
Sums of Observations
Sires
Treatment 1
2 3 4 Sums
1
54 21 13 25 113
2
55 33 8 0
96
3
44 17 0 9
70
Sums
153 71 21 34 279
y0 y = 2049.
The ordinary least squares equations for these data are useful for envisioning Method
1 as well as some others. The coefficient matrix is in (44). The right hand side vector is
(279, 113, 96, 70, 153, 71, 21, 34, 54, 21, 13, 25, 55, 33, 8, 44, 17, 9)0 .
39 18 12 9 21 9 3 6
18 0 0 8 3 2 5
12 0 7 4 1 0
9 6 2 0 1
21 0 0 0
9 0 0
3 0
Red (ts) =
Red (t) =
Red (s) =
C.F. =
E[Red (ts)] =
8
8
0
0
8
0
0
0
8
3
3
0
0
0
3
0
0
0
3
2
2
0
0
0
0
2
0
0
0
2
5
5
0
0
0
0
0
5
0
0
0
5
7
0
7
0
7
0
0
0
0
0
0
0
7
4
0
4
0
0
4
0
0
0
0
0
0
0
4
1
0
1
0
0
0
1
0
0
0
0
0
0
0
1
6
0
0
6
6
0
0
0
0
0
0
0
0
0
0
6
2
0
0
2
0
2
0
0
0
0
0
0
0
0
0
0
2
92
542 212
+
... +
= 2037.56.
8
3
1
1132 962 702
+
+
= 2021.83.
18
12
9
1532
342
+ ... +
= 2014.49.
21
6
2792 /39 = 1995.92.
2
10s2 + 39(s2 + t2 + ts
) + 39 2 .
11
1
0
0
1
0
0
0
1
0
0
0
0
0
0
0
0
0
1
(44)
For the expectations of other reductions as well as for the expectations of quadrat0
ics used in other methods including MIVQUE we need certain elements of W0 Z1 Z1 W,
0
0
W0 Z2 Z2 W, and W0 Z3 Z3 W, where Z1 , Z2 , Z3 refer to incidence matrices for t, s, and ts,
0
respectively, and W = [1 Z]. The coefficients of W0 Z1 Z1 W are in (45), (46), and (47).
Upper left 9 9
549 324 144 81 282 120 48 99 144
324
0 0 144 54 36 90 144
144 0 84 48 12 0
0
81 54 18 0 9
0
149 64 23 46 64
29
10
17
24
5 10 16
26 40
64
(45)
54
54
0
0
24
9
6
15
24
36
36
0
0
16
6
4
10
16
90
90
0
0
40
15
10
25
40
84
0
84
0
49
28
7
0
0
48 12 54 18 9
0 0 0 0 0
48 12 0 0 0
0 0 54 18 9
28 7 36 12 6
16 4 12 4 2
4 1 0 0 0
0 0 6 2 1
0 0 0 0 0
(46)
Lower right 9 9
9 6 15
4 10
25
0 0
0 0
0 0
49 28
16
0
0
0
7
4
1
0 0
0 0
0 0
0 0
0 0
0 0
36 12
4
0
0
0
0
0
0
6
2
1
12
(47)
Upper left 9 9
209 102 66 41 149 29 5 26 64
102 0 0 64 9 4 25 64
66
0
49
16
1
0
0
41 36 4 0 1 0
149 0 0 0 64
29 0 0 0
5 0 0
26 0
64
(48)
9
9
0
0
0
9
0
0
0
4 25 49 16 1 36 4 1
4 25 0 0 0 0 0 0
0 0 49 16 1 0 0 0
0 0 0 0 0 36 4 1
0 0 49 0 0 36 0 0
0 0 0 16 0 0 4 0
4 0 0 0 1 0 0 0
0 25 0 0 0 0 0 1
0 0 0 0 0 0 0 0
(49)
(50)
Lower right 9 9
102 70 59 168 27
66 50 147 36
41 126 18
441 0
81
13
9 36 168
6 30 64
3 0 56
0 6 48
0 0 168
0 0
0
9 0
0
36
0
64
(51)
27
9
12
6
0
27
0
0
0
6 30 147 36 3 126 18 6
4 25 56 12 2 48 6 5
2 0 49 16 1 42 8 0
0 5 42 8 0 36 4 1
0 0 147 0 0 126 0 0
0 0
0 36 0
0 18 0
6 0
0 0 3
0 0 0
0 30
0 0 0
0 0 6
0 0 56 0 0 48 0 0
(52)
Lower right 9 9
9 0
0
0
25
0 12 0 0
0 0 2 0
0 0 0 0
49 0 0 42
16 0 0
1 0
36
6
0
0
0
8
0
0
4
0
0
5
0
0
0
0
0
1
(53)
2
E[Red (t)] = 3e2 + 39t2 + k1 (s2 + ts
) + 39 2 .
102 66 41
k1 =
+
+
= 15.7222.
18
12
9
The numerators above are the 2nd, 3rd, and 4th diagonals of (48) and (51). The denominators are the corresponding diagonals of the least squares coefficient matrix of (44). Also
note that
102 =
n21j = 82 + 32 + 22 + 52 ,
j
2
66 = 7 + 42 + 12 ,
41 = 62 + 22 + 12 .
2
) + 39 2 .
E[Red (s)] = 4e2 + 39s2 + k2 (t2 + ts
149 29 5 26
k2 =
+
+ +
= 16.3175.
21
9
3
6
2
E( C.F.) = e2 + k3 ts
+ k4 t2 + k5 s2 + 39 2 .
209
549
567
k3 =
= 5.3590, k4 =
= 14.0769, k5 =
= 14.5385.
39
39
39
14
e2
R(ts)
R(s)
R(t)
CF
1
10
4
3
1
0
39
16.3175
15.7222
5.3590
0
39
39
15.7222
14.5385
e2
2
ts
s2
t2
39
2
0
39
16.3175
39
14.0769
0
1
1
1
1
e2
2
ts
s2
t2
392
1.
0
0
0
0
.3945
2037.56
.31433
.07302 .06979 .06675
.06352
.01361 .03006
.06979
.02379 .06352
2014.49
.04981 .02894
.02571
.06675 .06352 2021.83
.21453
.45306 1.00251 .92775 2.47720
1995.92
ts
= y0 WQ1 W0 y .31433
e2
where Q1 is a matrix formed from these elements of the inverse just above, (.07302, .06979, -.06675, 06352) and the matrices of quadratics in right hand sides representing
Red (ts), Red (s), Red (t), C.F.
Q1 is dg [.0016, -.0037, -.0056, -.0074, -.0033, -0.0078, -.0233, -.0116, .0091, .0243,
.0365, .0146, .0104, .0183, .0730, .0122, .0365, .0730]. dg refers to the diagonal elements
2
of a matrix. Then the contribution of tt0 to the expectation of
ts
is
0
.0257
tr
.0261 .0004
0
.0004 .0257
tt ,
.0261
that is, .0257 t21 + 2(.0261) t1 t2 2(.0004) t1 t3 .0004 t22 2(.0257) t2 t3 + .0261 t22 .
This is the bias due to regarding t as random. Similarly the quadratic in right hand sides
for estimation of s2 is
dg [-.0016, .0013, .0020, .0026, .0033, .0078, .0233, .0116, -.0038, -.0100, -.0150, -.0060,
-.0043, -.0075, -.0301, -.0050, -.0150, -.0301].
The bias in
s2 is
.0257 .0261
.0004
tr
.0004
0
.0257
tt .
.0261
2
This is the negative of the bias in
ts
.
Hendersons Method 3
Method 3 of Henderson(1953) can be applied to any general mixed model for variance
components. Usually the model assumed is
y = X +
Zi ui + e.
(54)
(55)
(56)
set that will give smallest sampling variance, but this is unknown. Consequently it is
tempting to select the easiest subset. This usually is
Red (, u1 ), Red (, u2 ), ..., Red (, ub ), Red (). For example: Red (, u2 ) is computed as follows. Solve
X0 X X0 Z2
0
0
Z2 X Z2 Z2
o
uo2
X0 y
Z2 0 y
Xs
j=1
trCi Wi Zj Zj Wi j2 + 0 X0 X.
(57)
W1 W 1 W1 W 2
0
0
W2 W 1 W2 W 2
Wi W i =
and
W1 y
0
W2 y
Wi y =
W2 PW2 2 = W2 Py
0
(58)
(59)
(W2 PW2 ) = C.
The coefficient of j2 is
0
(60)
For the reduction due to (, t, s) we can take the subset of OLS equations represented
by rows (and columns) 2-7 inclusive. This gives equations to solve as follows.
18
0 0
12 0
9
8
7
6
21
3
4
2
0
9
2
1
0
0
0
3
t1
t2
t3
s1
s2
s3
113
96
70
153
71
21
(61)
We can delete and s4 because the above is a full rank subset of the coefficient matrix
that includes and s4 . The inverse of the above matrix is
.1717 .1602 .1417 .1593 .1599 .1678
.2399
.1963
.1796
.3131
.1847
.5150
(62)
and this gives a solution vector [5.448, 6.802, 6.760, 1.011, 1.547, 1.100]. The reduction
is 2029.57. The coefficient of s2 in the expectation is 39 since s is included. To find the
2
coefficient of ts
define as T the submatrix of (51) formed by taking columns and rows
2
(2-7). Then the coefficient of ts
= trace [matrix (62)] T = 26.7638. The coefficient of e2
is 6. The reduction due to t and its expectation has already been done for Method 1.
18
th
0 0
0
r1
1
0
(63)
0 Wi Wi 0 o2 = r2 = r.
o3
r3
0 0
0
r = W0 y, where W = (X Z)
Red = r0 Qi r,
where Qi is some g-inverse of the coefficient matrix, (63). Then the coefficient of e2 in
the expectation is
0
rank (Qi ) = rank (Wi Wi ).
(64)
Coefficient of j2 is
tr Qi W0 Zj Zj W.
(65)
e2
Red (1)
..
.
= P
Red (b + 1)
Then the unbiased estimators are
..
= P1
.
b
d
0 X
0X
e2
12
..
.
b2
0 X0 X
e2
Red (1)
..
.
(66)
Red (b + 1)
provided P1 exists. If it does not, Method 3 estimators, at least with the chosen b + 1
reductions, do not exist. In our example
e2
1
10
Red (ts)
=
Red (ts) 6
Red (t)
3
0
39
26.7638
15.7222
e2
2
ts
s2
d
0 X
0X
0
39
39
15.7222
0
1
1
1
.3945
.5240
.0331
2011.89
1.
0
0
0
.32690
.08172 .08172
0
.02618 .03877
.08172 .04296
1.72791 .67542
0 1.67542
19
e2
2
ts
s2
0 X0 X
.3945
2037.56
2029.57
2021.83
10
We now present a very simple method for the general X model provided an easy
g-inverse of X0 X can be obtained. Write the following equations.
Z1 Py
0
Z2 Py
..
.
0
(67)
Zb Py
Let Di be a diagonal matrix formed from the diagonals of Zi PZi . Then compute the
following b quadratics,
0
y0 PZi D1
(68)
i Zi Py.
This computation is simple because D1
is diagonal. It is simply the sum of squares of
i
0
elements of Zi Py divided by the corresponding element of Di . The expectation is also
easy. It is
Xs
0
2
1 0
(69)
PZ
Z
tr
D
Z
qi e2 +
j
j PZi j .
i
i
j=1
0
Because Di1 is diagonal we need to compute only the diagonals of Zi PZj Zj PZi to find
the last term of (69). Then as in Methods 1 and 3 we find some estimate of e2 and equate
18 0 0
X0 X =
12 0
9
and a g-inverse is
0 0
0
0
1
18
0
0
121 0
91
The coefficient matrix of equations like (67) is in (70), (71) and (72) and the right
hand side is (.1111, 4.6111, .4444, -5.1667, 3.7778, 2.1667, .4444, -6.3889, -1, 1, 0, -2.6667,
1.4444, 1.2222)0 .
20
Upper left 7 7
6.7222
.6667
1.0556
1.3333
2.5 .3333
4.4444
1.3333
.8889
2.5 .3333
1.7778
(70)
2.2222
2.9167 2.3333 .5833
2.0 1.3333 .6667
.8333 2.3333
2.6667 .3333 1.3333
1.5556 .2222
3.6111
0
0
0 .6667 .2222
.8889
2.2222
0
0
0
0
0
0
.8333
0
0
0
0
0
0
.5556
0
0
0
0
0
0
(71)
Lower right 7 7
3.6111
0
0
0
2.9167 2.3333 .5833
2.6667 .3333
.9167
0
0
0
0
0
0
0
0
0
0
0
0
1.5556 .2222
.8889
(72)
The diagonals of the variance of the reduced right hand sides are needed in this
method and other elements are needed for approximate MIVQUE in Chapter 11. The
2
coefficients of e2 in this variance are in (70), . . . , (72). The coefficients of ts
are in (73),
0
(74) and (75). These are computed by (Cols. 5-14 of 10.70) (same) .
Upper left 7 7
25.75
.39
1.60
7.11
8.83
.22
5.66
.74 3.85
.22
4.37
8.83
.22
4.37
21
(73)
16.30
14.29 12.83 1.46
6.22 4.59 1.63
1.94 12.83
12.67
.17 4.59
4.25
.35
.74 1.46
.17
1.29
0
0
0
18.98
0
0
0 1.63
.35
1.28
16.30
0
0
0
0
0
0
1.94
0
0
0
0
0
0
.74
0
0
0
0
0
0
(74)
Lower right 7 7
18.98
0
0
0
14.29 12.83 1.46
12.67
.17
1.29
0
0
0
0
0
0
0
0
0
0
0
0
4.25
.35
1.28
(75)
The coefficients of s2 are in (76), (77), and (78). These are computed by (Cols 1-4
of 10.70) (same)0 .
Upper left 7 7
71.75
1.67
2.97
28.25
24.57
1.60
10.18
.96 6.81
.14
6.63
27.26
7.11
3.85
8.83
.22
4.37
(76)
26.25
39.83 34.69 5.14
27.31 19.62 7.70
2.07 29.88
29.81
.06 18.26
17.36
.90
.32 4.31
.76
3.55 1.69
1.05
.64
23.86 5.64
4.11
1.53 7.37
1.21
6.16
16.30
16.59 13.63 2.96
12.15 7.51 4.64
1.94 9.53
9.89 .36 5.44
5.85 .41
.74 2.85
.59
2.26
.96
.79
.17
22
(77)
Lower right 7 7
18.98 4.21
3.15
1.06 5.74
.86
4.88
12.67
.17 8.22
7.26
.96
1.29 .72
.26
.46
4.25
.35
1.28
(78)
(5.167)2
.1112
+ +
= 9.170.
9.361
4.5
2
+ 34.2770 s2 , where
The expectation is 4 e2 + 15.5383 ts
47.773
20.265
+ +
.
9.361
4.5
30.019
123.144
+ +
.
34.2770 =
9.361
4.5
15.5383 =
Thus
e2
1
0
0
e2
2
E Red(ts) = 10 35.7262 35.7262 ts .
s2
Red(s)
4 15.5383 34.2770
Then
e2
1
0
0
.3945
.3945
2
ts = .29855
.05120 .05337 23.799 = .6114
.
s2
.01864 .02321
.05337
9.170
.0557
11
Hendersons Method 2
nesting are not permitted. It is a relatively easy method, but usually no easier than
the method described in Sect. 10.10, absorption of , and little if any easier than an
approximate MIVQUE procedure described in Chapter 11.
Method 2 involves correction of the data by a least squares solution to excluding
. Then a Method 1 analysis is carried out under the assumption of a model
y = 1 +
X
i
Zi ui + e.
X a Xa Xa Z a
0
0
Za Xa Za Za
a
ua
Xa y
0
Za y
(79)
Let the upper submatrix (pertaining to a ) of the inverse of the matrix of (79) be denoted
by P. This can be computed as
0
P = [Xa Xa Xa Za (Za Za )1 Za Xa ]1 .
(80)
1 0 y = 1 0 y 1 0 Xa a .
0
0
0
Zi y = Zi y Zi Xa a i = 1, . . . , b.
(81)
(82)
Now compute
24
(83)
(84)
The expectations of these quadratics are identical to those with y in place of y except
for an increase in the coefficient of e2 computed as follows. Increase in coefficient of e2
in expectation of (83) by
trP(X0a 110 Xa )/n.
(85)
Increase in the coefficient of e2 in expectation of (84) is
0
trP(Xa Zi (Zi Zi )1 Zi Xa ).
0
(86)
18
0
12
8 3 2 5
7 4 1 0
21 0 0 0
9 0 0
3 0
6
a
ua
113
96
153
71
21
34
279
153
71
21
34
18 12
8 7
3 4
2 1
5 0
1.31154
.04287
302.093
163.192
74.763
23.580
40.558
Then the sum of squares of adjusted right hand sides for sires is
(163.192)2
(40.558)2
+ +
= 2348.732.
12
6
25
The adjusted C.F. is (302.093)2 /39 = 2340.0095. P is the upper 2x2 of the inverse of the
coefficient matrix (79) is
P =
.179532 .110888
.200842
21 0 0 0
9 0 0
3 0
6
8 3 2 5
7 4 1 0
9.547619 4.666667
4.444444
8
3
2
5
7
4
1
0
The trace of this (86) is 3.642 to be added to the coefficient of e2 in E (sires S.S). The
trace of P times the following matrix
18
12
39
1
(18 12) =
8.307692 5.538462
3.692308
12
A simple method for testing hypotheses approximately is the unweighted means analysis described in Yates (1934). This method is appropriate for the mixed model described
in Section 4 provided that every subclass is filled and there are no covariates. The smallest subclass means are taken as the observations as in Section 6 in Chapter 1. Then a
conventional analysis of variance for equal subclass numbers (in this case 1) is performed.
The expectations of these mean squares, except for the coefficients of e2 are exactly the
same as they would be had there actually been only one observation per subclass. An
algorithm for finding such expectations is given in Henderson (1959).
The coefficient of e2 is the same in every mean square. To compute this let s =
the number of smallest subclasses, and let ni be the number of observations in the ith
subclass. Then the coefficient of e2 is
Xs
i=1
n1
i /s.
26
(87)
Estimate e2 by
[y 0 y
Xs
i=1
(88)
where yi is the sum of observations in the ith subclass. Henderson (1978a) described a
simple algorithm for computing sampling variances for the unweighted means method.
We illustrate estimation by a two way mixed model,
yijk = ai + bj + ij + eijk .
bj is fixed.
V ar(a) = Ia2 , V ar() = I2 , V ar(e) = Ie2 .
Let the data be
A
1
2
3
4
1
5
2
1
2
nij
B
2
4
10
4
1
3
1
5
2
5
1
8
7
6
10
yij.
B
2 3
10 5
8 4
9 3
12 8
The mean squares and their expectation in the unweighted means analysis are
df
A
3
B
2
AB 6
MS
9.8889
22.75
.3056
E(ms)
.475 e2 + 2 + 3a2
.475 e2 + 2 + Q(b)
.475 e2 + 2
Suppose
e2 estimated as described above is .2132. Then
27
13
Section 2.c in Chapter 4 described a general method for testing the hypothesis, K0 =
0 against the unrestricted hypothesis. The mean square for this test is
( o )0 K(K0 CK)1 K0 o /f.
C is a symmetric g-inverse of the GLS equations or is the corresponding partition of a
g-inverse of the mixed model equations and f is the number of rows in K0 chosen to have
full row rank. Now as in other ANOVA based methods of estimation of variances we can
compute as though u is fixed and then take expectations of the resulting mean squares to
estimate variances. The following precaution must be observed. K0 u must be estimable
under a fixed u model. Then we compute
(uo )0 K(K0 CK)1 K0 uo /f,
(89)
o
uo
X0 y
Z0 y
(90)
(91)
This method seems particularly appropriate in the filled subclass case for then with interactions it is relatively easy to find estimable functions of u. To illustrate, consider the
two way mixed model of Section 11. Functions for estimating a2 are
a1 + 1. a4 4.
2. a4 4.
a2 +
/3.
a3 + 3. a4 4.
Functions for estimating 2 are
[ij i3 4j + 34 ]/6; i = 1, 2, 3; j = 1, 2.
This is an example of a weighted square of means analysis.
The easiest solution to the OLS equations for the 2 way case is ao , bo = null and
= yij . Then the first set of functions can be estimated as i.o 4.o (i = 1, 2, 3).
Reduce K0 to this same dimension and take C as a 12 12 diagonal matrix with diagonal
elements = n1
ij .
ijo
28
Chapter 11
MIVQUE of Variances and Covariances
C. R. Henderson
1984 - Guelph
1 y
X0 R
1 y
Z0 R
(1)
(2)
Xk
i=1
Vi i .
(3)
Then
=
V
Xk
i=1
Vi i ,
(4)
where i are prior values of i . The i are unknown parameters and the Vi are known
matrices of order n n. He proved that MIVQUE of is obtained by computing
1 Vi V
1 (y X o ), i = 1, . . . , k,
(5)
(y X o )0 V
equating these k quadratics to their expectations, and then solving for . o is any
solution to equations
1 y.
1 X o = X0 V
(6)
X0 V
In this section we show that other quadratics in y X o exist which yield the same
estimates as the La Motte formulation. This is important because there may be quadratics
easier to compute than those of (5), and their expectations may be easier to compute.
Let the k quadratics of (5) be denoted by q. Let E(q) = B, where B is k k. Then
provided B is nonsingular, MIVQUE of is
= B1 q.
(7)
Let H be any k k nonsingular matrix. Compute a set of quadratics Hq and equate to
their expectations.
E(Hq) = HE(q) = HB.
(8)
Then an unbiased estimator is
o = (HB)1 Hq
= B1 q = ,
(9)
V of LaMotte =
G11 g11
0
G12 g12
0
G13 g13
..
.
G12 g12
G22 g22
0
G23 g23
..
.
2
G13 g13
G23 g23
0
Z
G33 g33
..
.
R11 r11
0
R12 r12
0
R13 r13
..
.
or V1 1 =
V2 2 =
G11
0
0
..
.
0
0
G12
0
..
.
R11
0
0
..
.
0
0
0
..
.
R13 r13
R23 r23
R33 r33
..
.
R12 r12
R22 r22
0
R13 r23
..
.
= ZGZ0 + R.
0 0
0 0
0
Z g11 ,
0 0
.. ..
. .
G12
0
0
..
.
0
0
0
Z g12 ,
0
..
.
(10)
(11)
(12)
etc., and
Vb+1 b+1 =
Vb+2 b+2 =
0
0
R12
0
..
.
R12
0
0
..
.
0
0
r11 ,
0
..
.
0
0
r12 ,
0
..
.
(13)
etc. Define the first b(b + 1)/2 of (12) as ZGij Z0 and the last c(c + 1)/2 of (13) as Rij .
Then for one of the first b(b + 1)/2 of La Mottes quadratic we have
1 ZG Z0 V
1 (y X o ).
(y X o )0 V
ij
(14)
1 ZG
G
1 G G
1 GZ
0V
1 (yX o ).
(y X o )0 V
ij
(15)
Write this as
G
1 = I. Now note that GZ
0V
1 (y X o ) = u
= BLUP
This can be done because G
(16)
(17)
and R = R.
Taking into account that
is BLUP of e given that G = G
where e
G12 g12
G22 g22
..
.
G11 g11
0
G = G12 g12
..
.
1 =
C11
0
C12
0
C13
..
.
C12
C22
0
C23
..
.
C13
C23
= [C1 C2 C3 . . .].
C33
..
.
For example,
C2 =
C12
C22
0
C23
..
.
Then
1 G G
1 = Ci Gii C0 .
G
i
ii
1 = Ci Gij C0 + Cj G0 Ci for i 6= j.
1 G G
G
j
ij
ij
(18)
(19)
G =
G11 g11
0
0
0
G22 g22
0
,
0
0
G33 g33
..
..
..
.
.
.
and
1
G1
0
11 g11
1 1
0
G
g
.
=
22 22
..
..
.
.
G1
become
Then the quadratics in u
2
i G1
,
u
ii gii u
or an alternative is obviously
i G1
i,
u
ii u
4
(20)
can be converted to
Similarly if all rij = 0, the quadratics in e
0
i R1
i .
e
ii e
(21)
The traditional mixed model for variance components reduces to a particularly simple
form. Because all gij = 0, for i 6= j, all Gii = I, and R = I, the quadratics can be written
as
0
iu
i i = 1, . . . , b, and e
0 e
.
u
Pre-multiplying these quadratics by
we obtain
1
0
..
.
0
1
..
.
0
0
e2 /12 e2 /22 1
1u
1
u
0
2u
2
u
..
.
0 e
+
e
P e2
i 2
i
iu
i
u
y0 = [y1 y2 . . .].
0
yi
is the vector of records on the ith trait.
0
0
u0 = (u1 u2 . . .).
ui
is the vector of breeding values for the ith trait.
0
0
e0 = (e1 e2 . . .).
Every ui vector has the same number of elements by including missing u. Then
Ag11
Ag
12
G =
..
.
where
Ag12
Ag22 = A G0 ,
..
.
g11
g
G0 =
12
..
.
(22)
g12
g22
..
.
is the additive genetic variance- covariance matrix for a non-inbred population, and
denotes the direct product operation.
A1 g 11
A1 g 12
=
..
.
G1
A1 g 12
A1 g 22 = A1 G1
0 .
..
.
(23)
0
1
1 A1 u
u
0 1
1A u
2
! u
0
B11 B12
3
1 A1 u
u
.
0 1
0
B12 B22 u
2
2A u
0
u
1
3
2A u
0
3 A1 u
3
u
g 11 g 11 2g 11 g 12
2g 11 g 13
11 22
12 12
2g g + 2g g
2g 11 g 23 + 2g 12 g 13
=
.
11 33
13 13
2g g + 2g g
B11
g 12 g 12 2g 12 g 13
g 13 g 13
=
2g 12 g 22 2g 12 g 23 + 2g 13 g 22 2g 13 g 23 .
2g 12 g 23 2g 12 g 33 + 2g 13 g 23 2g 13 g 33
B12
g 22 g 22 2g 22 g 23
g 23 g 23
2g 22 g 33 + 2g 23 g 23 2g 23 g 33
=
.
g 33 g 33
B22
i by the inverse of
Premultiplying these quadratics in u
B11 B12
0
B12 B22
we obtain an equiv-
i A1 u
j for i = 1, . . . , 3; j = i, . . . , 3.
u
(24)
I r12
I r22
..
.
I r11
I
R = r12
..
.
0
are e
i e
j for i = 1, . . . , t; j = i, . . . , t.
Then quadratics in e
Computation Of Missing u
rather
In most problems, MIVQUE is easier to compute if missing u are included in u
1
than ignoring them. Section 3 illustrates this with A being the matrix of all quadratics
.
in u
Three methods for prediction of elements of u not in the model for y were described
in Chapter 5. Any of these can be used for MIVQUE. Probably the easiest is to include
the missing ones in the mixed model equations.
Traits
Combinations 1 2 3
1
X X X
2
X X 3
X - There are t possible combinations for t traits.
1 for animals with the same traits measured will be identical. Thus
The block of R
1 ,
if 50 animals have traits 1 and 2 recorded, there will be 50 identical 2 2 blocks in R
and only one of these needs to be stored.
. All of the quadratics
The same principle applies to the matrices of quadratics in e
0
i Q
i refers to the subvector of e
pertaining to the ith animal.
are of the form e
ei , where e
But animals with the same record combinations, will have identical matrices of quadratics
for estimation of a particular variance or covariance. The computation of these matrices
1 be P, which is symmetric
is simple. For a particular set of records let the block in R
and with order equal to the number of traits recorded. Label rows and columns by trait
number. For example, suppose traits 1, 3, 7 are recorded. Then the rows (and columns)
of P are understood to have labels 1, 3, 7. Let
P (p1 p2 p3 . . .),
where pi is the ith column vector of P. Then the matrix of the quadratic for estimating
rii is
0
pi pi .
(25)
The matrix for estimating rij (i 6= j) is
0
pi pj + pi pj .
(26)
Let us illustrate with an animal having records on traits 2, 4, 7. The block of R corresponding to this type of information is
6 4 3
8 5
.
7
Then the block corresponding to R1 is the inverse of this, which is
.27049 .14754
.
.26230
Then the matrix for estimation of r22 is
.25410
.25410
.10656
And e
Expectations Of Quadratics In u
and in e
to their
MIVQUE can be computed by equating certain quadratics in u
expectations. To find the expectations we need a g-inverse of the mixed model coefficient
and R,
prior values, substituted for G and R. The formulas for these
matrix with G
expectations are in Section 6 of Chapter 10. It is obvious from these descriptions of
expectations that extensive matrix products are required. However, some of the matrices
have special forms such as diagonality, block diagonality, and symmetry. It is essential
that these features be exploited. Also note that the trace of the products of several
matrices can be expressed as the trace of the product of two matrices, say trace (AB).
Because only the sum of the diagonals of the product AB is required, it would be foolish
to compute the off-diagonal elements. Some special computing algorithms are
trace (AB) =
X X
i
aij bji
(27)
(28)
(29)
Approximate MIVQUE
MIVQUE for large data sets is prohibitively expensive with 1983 computers because
a g-inverse of a very large coefficient matrix is required. Why not use an approximate ginverse that is computationally feasible? This was the idea presented by Henderson(1980).
The method is called Diagonal MIVQUE by some animal breeders. The feasibility of
this method and the more general one presented in this section requires that an approx 1 X can be computed easily. First absorb o from the mixed
imate g-inverse of X0 R
model equations.
1 X X0 R
1 Z
X0 R
1 Z + G
1
1 X Z0 R
Z0 R
1 y
X0 R
1 y
Z0 R
(30)
This gives
1 ] u
= Z0 Py
[Z0 PZ + G
(31)
1 R
1 X(X0 R
1 X) X0 R
1 , and (X0 R
1 X) is chosen to be symmetric.
where P = R
,
From the coefficient matrix of (31) one may see some simple approximate solution to u
(32)
11 as an approximation to
Interpret C
1 ]1
C11 = [Z0 PZ + G
,
Then given u
= (X0 R
1 X) (X0 R
1 y X0 R
1 Z
u).
Thus an approximate g-inverse to the coefficient matrix is
=
C
00 C
01
C
10 C
11
C
0
C
1
C
(33)
00 = (X0 R
1 X) + (X0 R
1 X) X0 R
1 ZC
11 Z0 R
1 X(X0 R
1 X) .
C
01 = (X0 R
1 X) X0 R
1 ZC
11 .
C
10 = C
11 Z0 R
1 X(X0 R
1 X) .
C
1 y
X0 R
1 y
Z0 R
equals
symmetric.
10
What are some possibilities for finding an approximate easy solution to u and conse 11 ? The key to this decision is the pattern of elements of the matrix
quently for writing C
of (31). If the diagonal is large relative to off-diagonal elements of the same row for every
11 to the inverse of a diagonal matrix formed from the diagonals of the corow, setting C
efficient matrix is a logical choice. Harville suggested that for the two way mixed variance
components model one might solve for the main effect elements of u by using only the
diagonals, but the interaction terms would be solved by adjusting the right hand side for
the previously estimated associated main effects and then dividing by the diagonal. This
11 .
would result in a lower triangular C
The multi-trait equations would tend to exhibit block diagonal dominance if the
11 might well take the form
elements of u are ordered traits within animals. Then C
B1
1
..
.
.
B1
2
..
..
.
.
and
and
where B1
is the inverse of the ith diagonal block, Bi . Having solved for u
i
and e
as in regular
having derived C one would then proceed to compute quadratics in u
MIVQUE. Their expectations can be found as described in Section 7 of Chapter 10 except
is substituted for C.
that C
MIVQUE (0)
(34)
(35)
(37)
and ri Gij rj i=1, . . . , b; j=i, . . . , b, where ri = absorbed right hand side for
Estimate rij from following quadratics
ij e
i R
j
e
uoi
equations.
where
1 X) X0 R
1 ]y.
= [I X(X0 R
e
(38)
The formulation of (16) cannot be used if G is singular, neither can (18) if Gii is
singular, nor (25) if A is singular. A simple modification gets around this problem. Solve
in (51) of Chapter 5. Then for (16) substitute
for
0 G ,
where u
=G
(39)
ij
i Gii
i.
(40)
i A
j
(41)
10
12
X0 y
Z0 y
(42)
are
If o is absorbed, the equations in u
1 )
(Z0 PZ + e2 G
u = Z0 Py,
(43)
where
P = I X(X0 X) X0 .
Let
1 )1 = C.
(Z0 PZ + e2 G
(44)
Then
= CZ0 Py.
u
0 Qi u
= y0 PZC0 Qi CZ0 Py
u
= tr C0 Qi CZ0 Pyy0 PZ.
) = tr C0 Qi C V ar(Z0 Py).
E(
u0 Qi u
V ar(Z0 Py) =
Xb
Xb
i=1
0
+Z
(45)
(46)
(47)
0
j=1
PPZe2 .
(48)
One might wish to obtain an approximate MIVQUE by estimating e2 from the OLS
residual. When this is done, the expectation of the residual is [n r(X Z)] e2 regardless
This method is easier than true MIVQUE and has advantages in
of the value of G.
computation of sampling variances because the estimator of e2 is uncorrelated with the
0 Q
various u
u. This method also is computable with absorption of o .
A further simplification based on the ideas of Section 7, would be to look for some
in (44). Call this solution u
and the corresponding
simple approximate solution to u
approximate g-inverse of the matrix of (44) C. Then proceed as in (46) . . . (48) except
for C.
for u
and C
substitute u
11
Sampling Variances
0 Qi u
, i = 1, . . . , b, where b = number of elements
MIVQUE consists of computing u
0
Qj e
, j = 1, . . . , t, where t = number of elements
of g to be estimated, and e
! of r to
C
be estimated. Let a g-inverse of the mixed model matrix be C
, and let
Cu
W = (X Z).
Then
1 y,
= Cu W0 R
u
1 WC0 Qi Cu W0 R
1 y y0 Bi y,
0 Q
u
u = y0 R
u
1 )y,
= (I WCW0 R
e
13
(49)
and
1 ]0 Qj [I WCW0 R
1 ]y y0 Fj y.
0 Qj e
= y0 [I WCW0 R
e
Let
y0 B1 y
..
(50)
y 0 F1 y
..
.
= P
g
r
g
r
= P, where =
Then MIVQUE of is
y 0 B1 y
..
y 0 H1 y
y 0 H2 y .
=
P1
0
y F1 y
..
.
..
.
(51)
Then
V ar(i ) = 2 tr[Hi V ar(y)]2 .
Cov(i , j ) = 2 trHi [V ar(y)] Hj [V ar(y)].
(52)
(53)
Cov(i , j ) = 2 tr(Hi VH
(54)
(55)
11.1
When R = Ie2 , one can estimate e2 by the residual mean square of OLS and an
approximate MIVQUE obtained. The quadratics to be computed in addition to
e2 are
0 Qi u
. Let
only u
!
!
0 Qi u
P
f
g
.
=
..
E
.
2
0
1
e2
Then
e2
P f
0 1
!1
0 H1 u
u
0 Qi u
u
0
u
H
u
2
..
=
..
.
e2
0
14
s1
e2
s2
e2
..
.
e2
(56)
Then
V ar(
gi ) = 2 tr[Hi V ar(
u)]2 + s2i V ar(
e2 ).
j ) = 2 tr[Hi V ar(
Cov(
gi , g
u) Hj V ar(
u)]
+ si sj V ar(
e2 ).
where
(57)
(58)
V ar(
u) = Cu [V ar(r)]Cu ,
and r equals the right hand sides of mixed model equations.
1 RR
1 W.
1 ZGZ0 R
1 W + W0 R
V ar(
u) = W0 R
(59)
If V ar(r) is evaluated with the same values of G and R used in the mixed model equations,
and R,
then
namely G
1 ZGZ
0R
1 W + W0 R
1 W.
V ar(r) = W0 R
V ar(
e2 ) = 2e4 /[n rank (W)],
(60)
(61)
where
e2 is the OLS residual mean square. This would presumably be evaluated for e2
=
e2 .
12
12.1
15
Upper left 7 7
.0848 .0037
.0071
.0048 .0191
.0815
.0118
.0081
.0069
.1331
.0227
.0169
.1486
.0110
.1582
(62)
.0112
.0085
.0071
.0268
.0273
.0177
.0138
.0178
.0121
.0057 .0147
.0088
.0060
.0126 .0177
.0050
.0089 .0129
.0040
.0061
.0052 .0113 .0004
.0001
.0003
.0009
.0004
.0006
.0063
.0040 .0102
.0058
.0071 .0014 .0039
.0054 .0015
.0025 .0011
.0036 .0003
.0004 .0001
(63)
Lower right 7 7
.1561
.0274
.0164
.1634
.0092
.1744
(64)
is
0 ts
The expectation of (ts)
0
C02 C2 is in (65), (66), and (67). Similarly C01 C1 is in (68), (69), and (70) where C2 , C1
refer to last 10 rows and last 4 rows of (62) to (64) respectively. This leads to expectations
as follows
0
2
ts)
= .23851 e2 + .82246 ts
E(ts
+ .47406 s2 .
2
E(s0s) = .03587 e2 + .11852 ts
+ .27803 s2 .
16
Using
e2 = .3945 leads then to estimates,
2
ts
= .6815,
s2 = .1114.
Upper left 7 7
.0682 .0215
.0977
.0815 .2809
.1134
.1201
.1094
.1012
.01
1.9485
.6920
.5532
2.3159
.4018
2.5645
(65)
.01
.1224
.1065
.1018
.3307
.8063
.5902
.4805
.2567
.1577
.1007
.0017
.2207
.1364
.0689
.1594
.0973 .2418
.1380
.1038
.2466
.0889
.1368 .2127
.0758
.0029
.0046
.1078
.0759 .1837
.1480 .0726
.2062 .1135 .0927
.0411
.1101 .0059
.0085 .0026
(66)
Lower right 7 7
2.3897 1.0959
2.4738
.01
.5144
.1641 .1395 .0247
.4303 .1465
.1450
.0016
2.5572
.8795
.5633
2.7646
.3559
3.0808
(67)
Upper right 7 7
.7271 .0705
.0520
.0382 .1522
.6764
.0814
.0613
.0583
.01
.0840
.0145
.0199
.0510 .0091
.0488
17
(68)
.01
.0732
.0658
.0620
.2010
.0496
.0274
.0198
.1066
.0656
.0411 .0878
.0484
.0393
.0728 .1130
.0402
.0503 .0820
.0317
.0464
.0431 .0896 .0063
.0017
.0046
.0126
.0043
.0083
.0438
.0319 .0757
.0339
.0450 .0111 .0220
.0340 .0120
.0183 .0103
.0286 .0006
.0020 .0014
(69)
Lower right 7 7
.0968 .0025
.0014
.0012 .0261
.0485
.0079
.0335
.0187 .0031
.01
.0335
12.2
.0116
.0377
.0321 .0045
.0335
0
.0014
.0045
.0219 .0117
.0258 .0039
.0156
(70)
Now
18
12.3
Examination of (10.70) to (10.72) shows that a subset of coefficients, namely [sj , ts1j ,
ts2j . . .] tends to be dominant. Consequently one might wish to exploit this structure. If
ij were reordered by i within j and the interactions associated with si placed adjacent
the ts
to si , the matrix would exhibit block diagonal dominance. Consequently we solve for u
in equations with the coefficient matrix zeroed except for coefficients of sj and associated
tsij , etc. blocks. This matrix is in (71, 72, and 73) below.
Upper left 7 7
19.361 0
0
0
16.722 0
0
12.694 0
14.5
4.444
0
0
0
9.444
0
2.5
0
0
0
7.5
0
0
1.778
0
0
0
6.778
(71)
0
0
0
3.611
0
0
0
2.917
0
2
0
0
0
0
0
2.667
0
0
0
0
0
0
0
.917
0
0
0
0
2.
0
0
0
0
0
0
0
1.556
0
0
0
0
0
0
0
0
.889
0
0
0
(72)
Lower right 7 7
dg (8.611, 7.9167, 7.6667, 5.9167, 7.0, 6.556, 5.889)
19
(73)
A matrix like (71) to (73) is easy to invert if we visualize the diagonal blocks with reordering. For example,
19.361 4.444 2.917 2.000
9.444 0
0
7.917 0
7.000
.1201
.0111
.0086
.
=
.1350
.0067
.1481
This illustrates that only 42 or 32 order matrices need to be inverted. Also, each of those
has a diagonal submatrix of order either 3 or 2. The resulting solution vector is
(-.0343, .2192, .0271, -.2079, .4162, .2158, .0585, -.6547, -.1137, .0542, -.0042, -.3711,
.1683, .2389).
0 ts
= .8909 with expectation
This gives (ts)
2
.32515 e2 + 1.22797 ts
+ .79344 s2 ,
12.4
19.361
0
0
0
0
0
0
0
16.722
0
0
0
0
0
0
0
12.694 0
0
0
0
0
0
0
14.5
0
0
0
4.444
0
0
0 9.444 0
0
0
2.5
0
0
0
7.5
0
0
0
1.778
0
0
0 6.778
20
(74)
Lower left 7 7
0
0
0 3.611 0 0 0
2.917
0
0
0
0 0 0
0
2.667 0
0
0 0 0
0
0
.917
0
0 0 0
2.0
0
0
0
0 0 0
0
1.556 0
0
0 0 0
0
0
0
.889 0 0 0
(75)
(76)
Lower right 7 7
This matrix is particularly easy to invert. The inverse has the zero elements in exactly
the same position as the original matrix and one can obtain these by inverting triangular
blocks illustrated by
19.361
0
0
0
4.444 9.4444
0
0
2.917
0
7.917
0
2.000
0
0
7.000
.0516
0
0
0
.0243 .1059
0
0
.0190
0
.1263
0
.0148
0
0
.1429
13
21
R and the Gi are known, and we wish to estimate e2 and the i2 . The mixed model
equations can be written as
X0 R1
X
0
1
Z1 R X
0
Z2 R1
X
..
.
X0 R1
Z1
0
1
Z1 R Z1 + G1
1 1
0
1
Z2 R Z1
..
.
X0 R1
...
Z2
0
1
Z1 R Z2
...
0
1
1
Z2 R Z2 + G2 2 . . .
..
.
o
1
u
2
u
..
.
X0 R1
y
0
Z1 R1
y
0
Z2 R1
y
..
.
(77)
0 R1
, u
i G1
i (i = 1, 2, . . .).
e
e
i u
0 R1
= y0 R1
But because e
e
y - (soln. vector) (r.h.s. vector)
,
G1 u
u
i i i i i
and
0
i (i = 1, 2, . . .).
i G1
u
i u
14
We assume treatments fixed with means t1 , t2 respectively. The three sires are a random
sample of unrelated sires from some population. Sire 1 had one progeny on treatment
1, and 2 different progeny on treatment 2, etc. for the other 2 sires. The sire and error
variances are different for the 2 treatments. Further there is a non-zero error covariance
22
between treatments. Thus we have to estimate g11 = sire variance for treatment l, g22
= sire variance for treatment 2, g12 = sire covariance, r11 = error variance for treatment
1, and r22 = error variance for treatment 2. We would expect no error covariance if the
progeny are from unrelated dams as we shall assume. The record vector ordered by sires
in treatments is [2, 3, 5, 7, 5, 9, 6, 8, 3].
We first use the basic La Motte method.
1 0 0 0
1 1 0
1 0
V1 pertaining to g11 =
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
1
1
0
0
0
0
1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
1
0
0
0
0
0
1
1
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
1
1
0 0 0 1
0 0 0
0 0
V2 pertaining to g12 =
0 0 0 0
0 0 0
0 0
V3 pertaining to g22 =
23
Use prior values of g11 = 3, g12 = 2, g22 = 4, r11 = 30, r22 = 35. Only the proportionality
of these is of concern. Using these values
=
V
33
0
33
0
3
33
2
0
0
39
2
0
0
4
39
0
2
2
0
0
39
0
2
2
0
0
4
39
0
0
0
0
0
0
0
39
0
0
0
0
0
0
0
4
39
1 Vi V
1 we obtain the following values for Qi (i=1, . . . ,5). These are in
Computing V
the following table (times .001), only non-zero elements are shown.
Element
(1,1)
(1,4),(1,5)
(2,2),(2,3)
(2,6),(2,7)
(3,3)
(3,6),(3,7)
(4,4)
(4,5)
(5,5)
(6,6)
(6,7)
(7,7)
(8,8),(9,9)
(8,9)
Q1
Q2
Q3
Q4
Q5
.92872 -.17278 .00804 .92872 .00402
-.04320 .71675 -.06630 -.04320 -.03315
.78781 -.14657 .00682 .94946 .00341
-.07328 .66638 -.06135 -.03664 -.03068
.78781 -.14657 .00682 .94946 .00341
-.07328 .66638 -.06135 -.03664 -.03068
.00201 -.06630 .54698 .00201 .68165
.00201 -.06630 .54698 .00201 -.13467
.00201 -.06630 .54698 .00201 .68165
.00682 -.12271 .55219 .00341 .68426
.00682 -.12271 .55219 .00341 -.13207
.00682 -.12271 .55219 .00341 .67858
0
0 .54083
0 .67858
0
0 .54083
0 -.13775
24
.229319
.862389
.00156056 .00029034
.00001350
.00117042 .000006752
.00435858
.00001012
.00217929
.00198893
.00000506
.00353862
25
g11
g12
g22
r11
r22
.00270080
.00462513
.00423360
.00424783
.01762701
This gives the solution [.500, 1.496, -2.083, 2.000, 6.333]. Note that the gij do not fall in
the parameter space, but this is not surprising with such a small set of data.
Next we illustrate with quadratics in u1 , . . . , u5 and e1 , . . . , e9 using the same priors
as before.
G11 = dg (1, 1, 0, 0, 0),
0 0 1 0 0
0 0 1 0
0 0 0
G12 =
,
0 0
0
G22
=
= dg (0, 0, 1, 1, 1), R
30 I 0
0 35 I
3 0 2 0 0
3 0 2 0
=
4 0 0
G
4 0
4
are
From these, the 3 matrices of quadratics in u
.25
0 .125
0
.25
0
.125
.0625
0
.0625
and
0
0
0
0
0
.0625
.25
0
.25
.25
0
.1875
0
.09375
.0625 .09375
.140625
are
Similarly matrices of quadratics in e
dg (.00111111, .00111111, .00111111, 0, 0, 0, 0, 0, 0),
and
dg (0, 0, 0, 1, 1, 1, 1, 1, 1)*.00081633.
26
0
0
0
0
0
0
0
0
0
.0625
0
.25
0
.1875
0
0
0
.140625
.1
0
.03333 .06667
0
0
0
.17143
0
0
.05714 .05714 .05714
.53333
0
.25
0
0
.56667
0
.25
0
.
.43214
0
0
.43214
0
.30714
.006179 .006179
.003574 .003574 0
.006179
.003574
.003574 0
.002068 .002068 0
=
g11
.002068 0
0
.009191 .009191
.012667 .012667 0
.009191
.012667
.012667 0
.011580 .011580 0
+
g12
.011580 0
.004860 .001976
.010329 .004560 .005769
.004860 .004560
.010329 .005769
g22
.021980 .011538
.023076
.004634 .004634
.002681 .002681 0
.004634 .002681
.002681 0
.001551 .001551 0
+
r11
.001551 0
27
.002430 .000988
.005164 .002280 .002884
.002430 .002280
.005164 .002884
.010990 .005769
.011538
e0 = [1.1024, 4488, 1.5512, .7885, 1.2115, 2.3898, .6102, 1.8217, 3.1783].
1 .
Let C be a g-inverse of the mixed model coefficient matrix, and T = I WCW0 R
Then
V ar(
e) = T(ZG11 Z0 g11 + ZG12 Z0 g12 + ZG22 Z0 g22 + R11 r11 + R22 r22 )T0
.702 .351 .351 .038 .038
.031
.038 0
.176
.176
.019
.019 .019 .019 0
.176
.019
.019 .019 .019 0
.002
.002 .002 .002 0
.002
.002 0
.002 0
.131
0
0
0
0
0
0
0
0
0
g11
.065
.065
.489
.489 .489 .489 0 0
.033 .033 .245 .245
.245
.245 0 0
.053 .053
.053
.053 0 0
.053
.053
.053 0 0
g12
.053 .053 0 0
.053 0 0
0 0
0
.002
.002
.023
.023 .023 .023
0
.002
.023
.023
.023
.023
0
.447
.447 .226 .226 .221
.447
.447 .221
.447 .221
.442
28
0
0
0
.221
.221
.221
.221
.442
.442
g22
.632 .369
.014
.014 .014 .014 0
.632
.014
.014 .014 .014 0
.002
.002 .002 .002 0
.002
.002 0
.002 0
.001
.001
.011
.011 .011 .011
0
.001
.011
.011 .011 .011
0
.723
.277
.113
.113
.110
.723 .110
.721
0
0
0
0
0
0
0
0
0
0
0
0
.110
.110
.110
.110
.279
.721
r11
r22 .
0 0 0 1
0 0 0
0 0
0 dg (1 1 1 0 0 0) u
, u
0
u
0
1
0
0
0
0
0
1
0
0
0
u
,
0 dg (0, 0, 0, 1, 1, 1) u
and equate to expectations we obtain exactly the same
and u
estimates as in the other three methods. We also could have computed the following
rather than the ones used, namely
quadratics in e
0 dg (1 1 1 0 0 0 0 0 0)
0 dg (0 0 0 1 1 1 1 1 1) e
.
e
e and e
Also we could have computed an approximate MIVQUE by estimating r11 from within
sires in treatment 1 and r22 from within sires in treatment 2.
In most problems the error variances and covariances contribute markedly to computational labor. If no simplification of this computation can be effected, the La Motte
29
1 is
quadratics might be used in place of quadratics in e. Remember, however, that V
1 , G
1
usually a large matrix impossible to compute by conventional methods. But if R
1 Z + G
1 )1 are relatively easy to compute one can employ the results,
and (Z0 R
1 Z + G
1 )1 ZR
1 .
1 = R
1 R
1 Z(Z0 R
V
can be derived
As already discussed, in most genetic problems simple quadratics in u
usually of the form
0
iu
j or u
i A1 u
j.
u
Then these might be used with the La Motte ones for the rij rather than quadratics in e
o
o
for the rij . The La Motte quadratics are in (y X ), the variance of y X being
1 X) X0 V
1 ]V[I X(X0 V
1 X) X0 V
1 ]0 .
[I X(X0 V
6= V in general, and V should be written in terms of gij , rij for
Remember that V
purposes of taking expectations.
15
The MIVQUE estimators of this chapter are translation invariant and unbiased. La
Motte also presented other estimators including not translation invariant biased estimators
and translation invariant biased estimators.
15.1
1 (y X)/(n
i = i (y X)
+ 2),
,
and V
are priors. This can also be computed as
where ,
0R
u
1 (y X)
1 (y X)]/(n
0 Z0 R
i = i [(y X)
+ 2),
where
1 Z + G
1 )1 Z0 R
1 (y X).
= (Z0 R
u
are used as priors.
The lower bound on MSE of i is 2i2 /(n + 2), when V,
30
15.2
bound on MSE of i is 2i /(n r + 2) when V is used as the prior for V. The lower bound
on i for the translation invariant, unbiased MIVQUE is 2di , when di is the ith diagonal
th
of G1
element of G0 is trW0 Vi W0 Vi for
0 and the ij
1 V
1 X(X0 V
1 X) X0 V
1 .
W0 = V
The estimators of sections 15.1 and 15.2 have the peculiar property that i /j = i /j .
Thus the ratios of estimators are exactly proportional to the ratios of the priors used in
the solution.
16
Expectations Of Quadratics In
C1
C2
Then
= C2 r, where r is the right hand vector of (5.51), and
0 Q)
= trQ V ar()
E(
0
= trQC2 [V ar(r)]C2 .
!
1 Z
X0 R
0 1
0 1
V ar(r) =
0 R1 Z G (Z R X Z R ZG)
GZ
+
1
X0 R
0R
1
GZ
1 X R
1 ZG).
R (R
(78)
31
X0 y
G Z0 y
(79)
X0 Z
Z0 Z
G
+
) 2
G (Z0 X Z0 ZG
e
X0 X
X 0 ZG
0
0
Z X G
Z ZG
32
e2 .
(80)
Chapter 12
REML and ML Estimation
C. R. Henderson
1984 - Guelph
Iterative MIVQUE
g11
g12
.
.
g12 ...
r11
r
g22 ...
12
and
.
.
.
.
r12 ...
r22 ...
.
.
in which either G or R is not a valid estimator. (LRS note: Other possibilities are bending
in which eigenvalues are modified to be positive and the covariance matrix is reformed
using the new eigenvalues with the eigenvectors.)
Quadratic, unbiased estimators may lead to solutions not in the parameter space.
This is the price to pay for unbiasedness. If the estimates are modified to force them
into the parameter space, unbiasedness no longer can be claimed. What should be done
1
An alternative algorithm for REML that is considerably easier per round of iteration
than iterative MIVQUE will now be described. There is, however, some evidence that
convergence is slower than in the iterative MIVQUE algorithm. The method is based on
the following principle. At each round of iteration find the expectations of the quadratics
and r are equal to g and r. This leads
under the pretense that the current solutions to g
to much simpler expectations. Note, however, that the first iterate under this algorithm
is not MIVQUE. This is the EM (expectation maximization) algorithm, Dempster et al.
(1977).
= g and r = r
From Henderson (1975a), when g
V ar(
u) = G C11 .
V ar(
e) = R WCW0 = R S.
(1)
(2)
V ar(
e) = Cov (
e, e0 ) = Cov[(y WCW0 R1 y), e0 ] = R WCW0 .
(3)
= C.
Note that if we proceed as in Section 11.5 we will need only diagonal blocks of WCW0
corresponding to the diagonal blocks of R .
V ar(
u) =
X X
i
(4)
ji
, and e
be the values computed
See Chapter 11, Section 3 for definition of G . Let C, S, u
th
for the k round of iteration. Then solve in the k+l round of iteration for values of g, r
from the following set of equations.
0 Q1 u
+ trQ1 C11
trQ1 G = u
..
.
2
0 Qb u
0 + trQb C11
trQb G = u
0 Qb+1 e
+ trQb+1 S
trQb+1 R = e
..
.
(5)
0 Qc e
+ trQc S.
trQc R = e
Note that at each round a set of equations must be solved for all elements of g , and
another set of all elements of r . In some cases, however, Q s can be found such that
only one element of gij (or rij ) appears on the left hand side of each equation of (5). Note
1 appears in a Qi the value of Qi changes at each round of iteration. The
also that if G
1 appearing in Qi for e
. Consequently it is desirable to find Qi that
same applies to R
isolate a single gij or rij in each left hand side of (5) and that are not dependent upon G
and R. This can be done for the gij in all genetic problems with which I am familiar.
The second algorithm for REML appears to have the property that if positive definite
G and R are chosen for starting values, convergence, if it occurs, will always be to positive
and R.
This suggestion has been made by Smith (1982).
definite G
ML Estimation
A slight change in the second algorithm for REML, presented in Section 2 results
1 Z + G1 )1 . In place
in an EM type ML algorithm. In place of C11 substitute (Z0 R
1 Z + G
1 )1 Z0 . Using a result reported by Laird and Ware
of WCW0 substitute Z(Z0 R
(1982) substituting ML estimates of G and R for the corresponding parameters in the
mixed model equations yields empirical Bayes estimates of u . As stated in Chapter 8
are also ML estimates of the conditional means of u .
the u
If one wishes to use the LaMotte type quadratics for REML and ML, the procedure
is as follows. For REML iterate on
trQj
1 X) X0 .
Vi i = (y X o )0 Qj (y X o ) + trQj X(X0 V
Qj are the quadratics computed by the LaMotte method described in Chapter 11. Also
this chapter describes the Vi . Further, o is a GLS solution.
ML is computed in the same way as REML except that
1 X) X0 is deleted.
trQj X(X0 V
The EM type algorithm converges slowly if the maximizing value of one or more parameters is near the boundary of the parameter space, eg.
i2 0. The result of Hartley and
Rao (1967) can be derived by this general EM algorithm.
3
Approximate REML
(6)
with equations written as (77) in Chapter 11. R and Gi are known. Then if = e2 /i2 ,
as is defined in taking expectations for the computations of Section 2, the expectation of
(6) is
[n rank (X)]e2 .
(7)
What if one has only limited data to estimate a set of variances and covariances,
but prior estimates of these parameters have utilized much more data? In that case it
might be logical to iterate only a few rounds using the EM type algorithm for REML or
ML. Then the estimates would be a compromise between the priors and those that would
be obtained by iterating to convergence. This is similar to the consequences of Bayesian
estimation. If the priors are good, it is likely that the MSE will be smaller than those for
ML or REML. A small simulation trial illustrates this. The model assumed was
yij
X0
ni
V ar(e)
V ar(a)
=
=
=
=
=
Xij + ai + eij .
(3, 2, 5, 1, 3, 2, 3, 6, 7, 2, 3, 5, 3, 2).
(3, 2, 4, 5).
4 I,
I,
4
Cov(a, e0 ) = 0.
5000 samples were generated under this model and EM type REML was carried out with
starting values of e2 /a2 = 4, .5, and 100. Average values and MSE were computed for
rounds 1, 2, ..., 9 of iteration.
Starting Value e2 /a2 = 4
e2
a2
e2 /
a2
Rounds Av. MSE Av. MSE Av. MSE
1
3.98 2.37 1.00
.22 4.18
.81
2
3.93 2.31 1.04
.40 4.44 2.97
3
3.88 2.31 1.10
.70 4.75 6.26
4
3.83 2.34 1.16 1.08 5.09 10.60
5
3.79 2.39 1.22 1.48 5.47 15.97
6
3.77 2.44 1.27 1.86 5.86 22.40
7
3.75 2.48 1.31 2.18 6.26 29.90
8
3.74 2.51 1.34 2.43 6.67 38.49
9
3.73 2.53 1.35 2.62 7.09 48.17
In this case only one round appears to be best for estimating e2 /a2 .
Starting Value e2 /a2 = .5
a2
e2 /
a2
MSE Av. MSE Av. MSE
2.42 3.21 7.27 1.08 8.70
2.37 2.53 4.79 1.66 6.27
2.38 2.20 4.05 2.22 5.11
2.41 2.01 3.77 2.75 5.23
2.43 1.88 3.64 3.28 6.60
2.45 1.79 3.58 3.78 9.20
2.47 1.73 3.54 4.28 12.99
2.49 1.67 3.51 4.76 17.97
2.51 1.63 3.50 5.23 24.11
e2
Rounds
1
2
3
4
5
6
7
8
9
Av.
3.14
3.30
3.40
3.46
3.51
3.55
3.57
3.60
3.61
Starting Value
e2 /
a2
a2
e2
Rounds Av. MSE Av. MSE
1
4.76 4.40 .05
.91
2
4.76 4.39 .05
.90
3
4.76 4.38 .05
.90
4
4.75 4.37 .05
.90
5
4.75 4.35 .05
.90
6
4.75 4.34 .05
.90
7
4.75 4.32 .05
.90
8
4.74 4.31 .06
.89
9
4.74 4.28 .06
.89
= 100
a2
e2 /
Av. MSE
.99 9011
.98 8818
.97 8638
.96 8470
.95 8315
.94 8172
.92 8042
.91 7923
.90 7816
Convergence with this very high starting value of e2 /a2 relative to the true value of 4 is
very slow but the estimates were improving with each round.
Statisticians and users of statistics have for many years discussed the problem of
estimates of variances that are less than zero. Most commonly employed methods
of estimation are quadratic, unbiased, and translation invariant, for example ANOVA
estimators, Methods 1,2, and 3 of Henderson, and MIVQUE. In all of these methods there
is a positive probability that a solution to one or more variances will be negative. Strictly
speaking, these are not really estimates if we define, as some do, that an estimate must
lie in the parameter space. But, in general, we cannot obtain unbiasedness unless we are
prepared to accept such solutions. The argument used is that such estimates should
be reported because eventually there may be other estimates of the same parameters
obtained by unbiased methods, and then these can be averaged to obtain better unbiased
estimates.
Other workers obtain truncated estimates. That is, given estimates
12 , ...,
q2 , with
2
say
q2 < 0, the estimates are taken as
12 , ...,
q1
, 0. Still others revise the model so that
the offending variable is deleted from the model, and new estimates are then obtained of
the remaining variances. If these all turn out to be non-negative, the process stops. If
some new estimate turns negative, then that variance is dropped from the model and a
new set of estimates obtained.
These truncated estimators can no longer be defined as unbiased. Verdooren (1980) in
an interesting review of variance component estimation uses the terms permissible and
impermissible to characterize estimators. Permissible estimators are those in which the
solution is guaranteed to fall in the parameter space, that is all estimates of variances are
e1j
e2j
c11 c12
c12 c22
The last of these criteria insures that the estimated correlation between e1j and e2j falls
in the range -1 to 1. The literature reporting genetic correlation estimates contains many
cases in which the criteria are not met, this in spite of probable lack of reporting of many
other sets of computations with such results. The problem is particularly difficult when
there are more than 2 variates. Now it is not sufficient for all estimates of variances to
be non- negative and all pairs of estimated correlations to fall in the proper range. The
requirement rather is that the estimated variance-covariance matrix be either positive
definite or at worst positive semi-definite. A condition guaranteeing this is that all latent
roots (eigenvalues) be positive for positive definiteness or be non-negative for positive
semidefiteness. Most computing centers have available a good subroutine for computing
eigenvalues. We illustrate with a 3 3 matrix in which all correlations are permissible,
but the matrix is negative definite.
3 3 4
4 4
3
4
4 6
The eigenvalues for this matrix are (9.563, 6.496, -3.059), proving that the matrix is
negative definite. If this matrix represented an estimated G for use in mixed model
equations, one would add G1 to an appropriate submatrix, of OLS equations, but
G1
.042 .139
.011
=
.147
.126
,
.016
so one would add negative quantities to the diagonal elements, and this would make no
sense. If the purpose of variance-covariance estimation is to use the estimates in setting
up mixed model equations, it is essential that permissible estimators be used.
7
Another difficult problem arises when variance estimates are to be used in estimating
h . For example, in a sire model, an estimate of h2 often used is
2
2 = 4
h
s2 /(
s2 +
e2 ).
By definition 0 < h2 < 1, the requirement that
s2 > 0 and
e2 > 0 does not insure
2 is permissible. For this to be true the permissible range of
that h
s2 /
e2 is 0 to 31 . This
would suggest using an estimation method that guarantees that the estimated ratio falls
in the appropriate range.
In the multivariate case a method might be derived along these lines. Let some
translation invariant unbiased estimator be the solution to
C
v = q,
where q is a set of quadratics and Cv is E(q). Then solve these equations subject to a set
to fall in the parameter space, as a minimum, all eigenvalues
of inequalities that forces v
comprises the elements of the variance-covariance matrix.
0 where v
When G is singular we can use a method for EM type REML that is similar to
Chapter 13
Effects of Selection
C. R. Henderson
1984 - Guelph
Introduction
The models and the estimation and prediction methods of the preceding chapters
have not addressed the problem of data arising from a selection program. Note that
the assumption has been that the expected value of every element of u is 0. What if u
represents breeding values of animals that have been produced by a long-time, effective,
selection program? In that case we would expect the breeding values in later generations
to be higher than in the earlier ones. Consequently the expected value of u is not really 0
as assumed in the methods presented earlier. Also it should be noted that, in an additive
genetic model, Aa2 is a correct statement of the covariance matrix of breeding values if no
selection has taken place and a2 = additive genetic variance in an unrelated, non-inbred,
unselected population. Following selection this no longer is true. Generally variances are
reduced and the covariances are altered. In fact, there can be non-zero covariances for
pairs of unrelated animals. Further, we often assume for one trait that V ar(e) = Ie2 .
Following selection this is no longer true. Variances are reduced and non-zero covariances
are generated. Another potentially serious consequence of selection is that previously
uncorrelated elements of u and e become correlated with selection. If we know the new
first and second moments of (y, u) we can then derive BLUE and BLUP for that model.
This is exceedingly difficult for two reasons. First, because selection intensity varies from
one herd to another, a different set of parameters would be needed for each herd, but
usually with too few records for good estimates to be obtained. Second, correlation of u
with e complicates the computations. Fortunately, as we shall see later in this chapter,
computations that ignore selection and then use the parameters existing prior to selection
sometimes result in BLUE and BLUP under the selection model. Unfortunately, comparable results have not been obtained for variance and covariance estimation, although
there does seem to be some evidence that MIVQUE with good priors, REML, and ML
may have considerable ability to control bias due to selection, Rothschild et al. (1979).
An Example of Selection
We illustrate some effects of selection and the properties of BLUE, BLUP, and OLS
by a progeny test example. The progeny numbers were distributed as follows
Treatments
1
2
10
500
10
100
10
0
10
0
Sires
1
2
3
4
We assume that the sires were ranked from highest to lowest on their progeny averages in
Period 1. If that were true in repeated sampling and if we assume normal distributions,
one can write the expected first and second moments. Assume unrelated sires, e2 = 15,
s2 = 1 under a model,
yijk = si + pj + eijk .
With no selection
y11
y21
y31
y41
y12
y22
p1
p1
p1
p1
p2
p2
2.5 0
0
0
2.5 0
0
2.5 0
V ar =
2.5
1
0
0
0
1.03
0
1
0
0
0
1.15
With ordering of sires according to first records the corresponding moments are
1.628
.460
.460
1.628
.651
.184
+
+
+
+
+
+
p1
p1
p1
p1
p2
p2
.901 .590
.901
and
.262
.395
.614
1.229
.492
.246
.158
.105
.827
.246
.360
.236
.158
.098
.894
.651
.184
.184
.651
and
.
.744 .098
.797
These results are derived from Teicheroew (1956), Sarhan and Greenberg (1956), and
Pearson (1903).
2
Suppose p1 = 10, p2 = 12. Then in repeated sampling the expected values of the
6 subclass means would be
11.628 12.651
10.460 12.184
9.540
8.372
Applying BLUE and BLUP, ignoring selection, to these expected data the mixed model
equations are
40
p1
0 10 10 10 10
p2
525
0 0 0 s1
=
125 0 0 s2
25 0 s3
s4
25
400.00
7543.90
6441.78
1323.00
95.40
83.72
The solution is [10.000, 12.000, .651, .184, .184, .651], thereby demonstrating
and s. The reason for this is discussed in Section 13.5.1.
unbiasedness of p
In contrast the OLS solution gives biased estimators and predictors. Forcing
0 as in the BLUP solution we obtain as the solution
si =
o
0 s1 = 6325.50
500 500
.
o
1218.40
s2
100 0 100
A solution is [0, 12.651, 12.184]. Then so1 so2 = .467 = E(s1 s2 ) under the selection
model. This result is equivalent to a situation in which the observations on the first period
are not observable and we define selection at that stage as selection on u, in which case
treating u as fixed in the computations leads to unbiased estimators and predictors. Note,
however, that we obtain invariant solutions only for functions that are estimable under a
fixed u model. Consequently p2 is not estimable and we can predict only the difference
between s1 and s2 .
Pearson (1903) derived results for the multivariate normal distribution that are extremely useful for studying the selection problem. These are the results that were used in
3
the example in Section 13.2. We shall employ the notation of Henderson (1975a), similar
to that of Lawley (1943), rather than Pearsons, which was not in matrix notation. With
0
0
no selection [v1 v2 ] have a multivariate normal distribution with means,
1 2
v1
v2
, and V ar
C11 C12
0
C12 C22
(1)
Suppose now in conceptual repeated sampling v2 is selected in such a way that it has
mean = 2 + k and variance = Cs . Then Pearsons result is
v1
v2
Es
v1
v2
V ars
1 + C12 C1
22 k
2 + k
(2)
(3)
1
where C0 = C1
22 (C22 Cs )C22 . Henderson (1975) used this result to derive BLUP
and BLUE under a selection model with a multivariate normal distribution of (y, u, e)
assumed. Let w be some vector correlated with (y, u). With no selection
V ar
y
u
e
w
y
u
e
w
X
0
0
d
V
GZ0
R
B0
ZG
G
0
0
Bu
(4)
R
0
R
0
Be
B
Bu
Be
H
(5)
and
V = ZGZ0 + R, B = ZBu + Be .
Now suppose that in repeated sampling w is selected such that E(w) = s 6= d, and
V ar(w) = Hs . Then the conditional moments are as follows.
y
X + Bt
Bu t
E u =
,
w
s
(6)
where t = H1 (s d).
0
V BH0 B
ZG BH0 B0 BH1 Hs
y
0
V ar u = GZ0 BH0 B0 G Bu H0 Bu Bu H1 Hs ,
0
w
Hs H1 B0
Hs H1 Bu
Hs
where H0 = H1 (H Hs )H1 .
4
(7)
To find BLUE of K0 and BLUP of u under this conditional model, find linear
functions that minimize diagonals of V ar(K0 ) and variance of diagonals of (
u u)
subject to
E(K0 o ) = K0 and E(
u) = Bu t.
This is accomplished by modifying GLS and mixed model equations as follows.
X0 V1 X X0 V1 B
B0 V1 X B0 V1 B
o
to
X0 V1 y
B0 V1 y
(8)
BLUP of k0 + m0 u is
k0 o + m0 u to + m0 GZ0 (y X o Bto ).
Modified mixed model equations are
X0 R1 X X0 R1 Z
X0 R1 Be
0 1
0 1
1
Z0 R1 Be G1 Bu
ZR X ZR Z+G
0
0
0
0
0
Be R1 X Be R1 Z Bu G1 Be R1 Be + Bu G1 Bu
o
0
0
o
.
u = X0 R1 y Z0 R1 y Be R1 y
o
t
(9)
then
V ar(K0 o ) = K0 C11 K.
(10)
=
=
=
=
=
K0 C11 K.
K0 C12 .
C22 .
0
K0 C13 Bu .
0
0
0
G C22 + C23 Bu + Bu C23 Bu H0 Bu .
5
(11)
(12)
(13)
(14)
(15)
Note that (10), ..., (13) are analogous to the results for the no selection model, but (14)
and (15) are more complicated. The problems with the methods of this section are that
w may be difficult to define and the values of Bu and Be may not be known. Special cases
exist that simplify the problem. This is true particularly if selection is on a subvector
of y, and if estimators and predictors can be found that are invariant to the value of
associated with the selection functions.
Suppose that whatever selection has occurred has been a consequence of use of the
record vector or some subvector of y. Let the type of selection be described in terms of
a set of linear functions, say L0 y, such that
E(L0 y) = L0 X + t,
where t 6= 0. t would be 0 if there were no selection.
V ar(L0 y) = Hs .
Let us see how this relates to (9).
Bu = GZ0 L, Be = RL, H = L0 VL.
Substituting these values in (9) we obtain
X0 R1 y
o
X0 R1 X
X0 R1 Z
X0 L
0 1
0 1
0 1
1
= Z R y .
0 u
ZR X ZR Z+G
0
0
L0 y
LX
0
L VL
5.1
(16)
Selection with L0 X = 0
0
L = 010
110 110 0610
0
0
0
0
020
110 110 0600
where
1010 denotes a row vector of 10 one0 s.
00620 denotes a null row vector with 620 elements, etc.
It is easy to see that L0 X = 0, and that explains why we obtain unbiased estimators
and predictors from the solution to the mixed model equations.
Let us consider a much more general selection method that insures that L0 X = 0.
Suppose in the first cycle of selection that data to be used in selection comprise a subvector
of y, say ys . We know that Xs , consisting of such fixed effects as age, sex and season,
causes confusion in making selection decisions, so we adjust the data for some estimate of
Xs , say Xs o so the data for selection become ys Xs o . Suppose that we then evaluate
0
the ith candidate for selection by the function ai (ys Xs o ). There are c candidates for
selection and s of them are to be selected. Let us order the highest s of the selection
functions with labels 1 for the highest, 2 for the next highest, etc. Leave the lowest c s
unordered. Then the animals labelled 1, ..., s are selected, and there may, in addition,
be differential usage of them subsequently depending upon their rank. Now express these
selection criteria as a set of differences, of a0 (ys Xs o ),
1 2, 2 3, (s 1) s, s (s + 1), ..., s c.
Because Xs o is presumably a linear function of y these differences are a set of linear
functions of y, say L0 y. Now suppose o is computed in such a way that E(Xs o ) = Xs
in a no selection model. (It need not be an unbiased estimator under a selection model,
but if it is, that creates no problem). Then L0 X will be null, and the mixed model
equations ignoring selection yield BLUE and BLUP for the selection model. This result
and R
will result in biases
is correct if we know G and R to proportionality. Errors in G
and R
under a selection model, the magnitude of bias depending upon how seriously G
depart from G and R and upon the intensity of selection. The result also depends upon
normality. The consequences of departure from this distribution are not known in general,
but depend upon the form of the conditional means.
We can extend this description of selection for succeeding cycles of selection and
still have L0 X = 0. The results above depended upon the validity of the Pearson result
and normality. Now with continued selection we no longer have the multivariate normal
distribution, and consequently the Pearson result may not apply exactly. Nevertheless
with traits of relatively low heritability and with a new set of normally distributed errors
for each new set of records, the conditional distribution of Pearson may well be a suitable
approximation.
7
The previous section deals with strict truncation selection on a linear function of
records. This is not entirely realistic as there certainly are other factors that influence
the selection decisions, for example, death, infertility, undesirable traits not recorded
as a part of the data vector, y. It even may be the case that the breeder did have
available additional records and used them, but these were not available to the person or
organization attempting to estimate or predict. For these reasons, let us now consider a
different selection model, the functions used for making selection decision now being
0
ai (y X o ) + i
where i is a random variable not observable by the person performing estimation and
prediction, but may be known or partially known by the breeder. This leads to a definition
of w as follows
w = L0 y + .
Cov(y, w0 )
Cov(u, w0 )
Cov(e, w0 )
V ar(w)
=
=
=
=
B = VL + C, where C = Cov(y, 0 ).
Bu = GZ0 L + Cu , where Cu = Cov(u, 0 )
Be = RL + Ce , where Ce = Cov(e, 0 ).
L0 VL + L0 C + C0 L + C , where C = V ar().
(17)
(18)
(19)
(20)
Applying these results to (9) we obtain the modified mixed model equations below)
X0 R1 X
X0 R1 Z
X0 L + X0 R1 Ce
0 1
Z0 R1 Z + G1
ZR1 Ce G1 Cu
ZR X
0
0
0
1
1 0
1
0
L X + Ce R X Ce R Z + Cu G
X0 R1 y
o
0 1
= ZR y
,
u
0
0
1
L y + Ce R y
(21)
where = L0 VL + Ce R1 Ce + Cu G1 Cu + L0 C + C0 L.
Now if L0 X = 0 and if is uncorrelated with u and e, these equations reduce
to the regular mixed model equations that ignore selection. Thus the non-observable
variable used in selection causes no difficulty when it is uncorrelated with u and e. If
the correlations are non-zero, one needs the magnitudes of Ce , Cu to obtain BLUE and
BLUP. This could be most difficult to determine. The selection models of Sections 5 and
6 are described in Henderson (1982).
Selection On A Subvector Of y
X1
X2
e1
e2
V ar
Z1 u
Z2 u
R11 R12
R12 R22
e1
e2
Presumably y1 are data from earlier generations. Suppose that selection which has occurred can be described as
!
y1
0
0
L y = (M 0)
.
y2
Then the equations of (16) become
0
X0 R1 y
o
X0 R1 X
X0 R1 Z
X1 M
0 1
= Z0 R1 y
0
Z R X Z0 R1 Z + G1
u
M 0 y1
M0 X1
0
M0 V11 M
(22)
where
k = (M0 V11 M)1 t,
0
t being the deviation of mean of M0 y1 from X1 . If we solve for o and uo in the equations
(23) that regard u as fixed for purposes of computation, then
E[K0 o + T0 uo ] = K0 + E[T0 u | M0 y1 ]
provided that K0 + T0 u is estimable under a fixed u model.
0
1
X2 R1
22 X2 X2 R22 Z2
0
0
1
1
Z2 R22 X2 Z2 R22 Z2
o
uo
X2 R1
22 y2
0
1
Z2 R22 y2
(23)
This of course does not prove that K0 o + T0 uo is BLUP of this function under M0 y1
selection and utilizing only y2 . Let us examine modified mixed model equations regarding
y2 as the data vector and M0 y1 = w. We set up equations like (21).
0
X2 R1
X2 R1
0
o
X2 R1
22 X2
22 Z2
22 y2
0
0
o
0 1
0 1
1
1
Z1 M
u = Z2 R22 y2 .
Z2 R22 X2 Z2 R22 Z2 + G
0
0
0
M0 Z1
M0 Z1 GZ1 M
(24)
A sufficient set of conditions for the solution to o and uo in these equations being equal
to those of (23) is that M0 = I and Z1 be non-singular. In that case if we absorb we
obtain the equations of (23).
Now it seems implausible that Z1 be non-singular. In fact, it would usually have
1 be the mean
more rows than columns. A more realistic situation is the following. Let y
1 vector. Then the model for y
1 is
of smallest subclasses in the y
1 + Z
1 u + e1 .
1 = X
y
See Section 1.6 for a description of such models. Now suppose selection can be described
as I
y1 . Then
Be = 0 if R12 = 0, and
1.
Bu = Z
Then a sufficient condition for GLS using y2 only and computing as though u is fixed
1
to be BLUP under the selection model and regarding y2 as that data vector is that Z
be non-singular. This might well be the case in some practical situations. This is the
selection model in our sire example.
Selection On u
Cases exist in animal breeding in which the data represent observations associated
with u that have been subject to prior selection, but with the data that were used for such
selection not available. Henderson (1975a) described this as L0 u selection. If no selection
on the observable y vector has been effected, BLUE and BLUP come from solution to
equations (25).
X0 R1 X
X0 R1 Z
0
o
X0 R1 y
0 1
L
Z R X Z0 R1 Z + G1
uo = Z0 R1 y
0
L0
L0 GL
10
(25)
o
uo
X0 R1 y
Z0 R1 y
(26)
o
uo
X0 R1 y
Z0 R1 y
(27)
A sufficient condition for this to be BLUP is that L = I. The proof comes by substituting
I for L in (26). In sire evaluation L0 u selection can be accounted for by proper grouping.
Henderson (1973) gave an example of this for unrelated sires. Quaas and Pollak (1981)
extended this result for related sires. Let G = As2 . Write the model for progeny as
y = Xh + ZQg + ZS + e,
where h refers to fixed herd-year-season and g to fixed group effects. Then it was shown
that such grouping is equivalent to no grouping, defining L = G1 Q, and then using (25).
We illustrate this method with the following data.
group sire ni
1
1
2
2
3
3
1
2
4
2
5
3
3
6
1
7
2
8
1
9
2
1 0 .5 .5
1 0 0
1 .25
A =
0
.5
0
0
1
.25
0
.5
.125
0
1
11
yi
10
12
7
6
8
3
5
2
8
.25
0
.125
.5
0
.0625
1
0
.25
0
0
.5
0
0
1
.125
0
.25
.0625
0
.5
.03125
0
1
Q0
1 1 1 0 0 0 0 0 0
= 0 0 0 1 1 0 0 0 0 .
0 0 0 0 0 1 1 1 1
This gives
L0
12 16 12 8 8 8
0
0 0
1
0 20 20
0 8 8 0
= 8 8
= G Q,
0
0 8 8 8 12 16 16 8
and
40 16 8
40 16
L0 GL =
.
52
Then the equations like (25) give a solution
o = 2.9014,
so = (2.0597, 1.7714, 2.1581, 0, .1689, .1905, .0108, .0739, .2269),
= (1.9651, .0339, .0453).
The sire evaluation is o + soi and this is the same as when groups were included.
In some applications the base population animals are not a random sample from some
population, but rather have been selected. Consequently the additive genetic variancecovariance matrix for these animals is not a2 I, where a2 is the additive genetic variance in
2
2
the population from which these animals were were taken. Rather it is As a
, where a
2
6= a in general. If the base population had been a random sample from some population,
the entire A matrix would be
I
A12
0
A12 A22
12
(28)
The inverse of this can be found easily by the method described by Henderson (1976).
Denote this by
C11 C12
0
C12 C22
(29)
If the Pearson result holds, the A matrix for this conditional population is
!
As
As A12
0
0
A12 As A22 A12 (I As )A12
(30)
(31)
0
where Cs = A1
s C12 A12 ,
(32)
1.0 0
1.0
unconditional A =
.2
.1
1.1
.3
.2
.3
1.2
.1
.2
.5
.2
1.3
.7 .4
.4
.8
.7 .4 .1
.13
.8 0
.04
1.07
.25
1.117
.01
.12
.47
.151
1.273
1.103168
13
.003730
.143513
.411712
.031349
.945770
1.812405
.003286 .170155
1.172793 .191114
.977683
A1
=
s
2.0 1.0
1.0 1.75
.2 .1
= .3 .2 ,
.1 .2
and
A1
s
, C12 =
A12
.003730
.143513
.411712
.031349
.954770
C12 A12 =
2.103168 1.064741
1.812405
10
In all previous discussions of prediction in both the no selection and the selection
model we have used as our criteria linear and unbiased with minimum variance of the
prediction error. That is, we use a0 y as the predictor of k0 + m0 u and find a that
minimizes E(a0 y k0 m0 u)2 subject to the restriction that E(a0 y) = k0 + E(m0 u).
This is a logical criterion for making selection decisions. For other purposes such as
estimating genetic trend one might wish to minimize the variance of the predictor rather
than the variance of the prediction error. Consequently in this section we shall derive a
predictor of k0 + m0 u, say a0 y, such that E(a0 y) = k0 + E(m0 u) and has minimum
variance. For this purpose we use the L0 y type of selection described in Section 5. Let
E(L0 y) = L0 X + t, t6=0.
V ar(L0 y) = Hs 6=L0 VL.
Then
E(y | L0 y) = X + VL(L0 VL)1 t X + VLd.
E(u | L0 y) = GZ0 L(L0 VL)1 t GZ0 Ld.
V ar(y | L0 y) = V VL(L0 VL)1 (L0 V Hs )(L0 VL)1 L0 V Vs .
Then we minimize V ar(a0 y) subject to E(a0 y) = k0 + m0 GZ0 Ld. For this expectation
to be true it is required that
X0 a = k and L0 Va = L0 ZGm.
14
Vs X VL
a
0
0 0
k
X0
=
0
0
LV 0 0
L ZGm
(33)
(34)
V
X VL
a
0
0
X
0
0
k
=
.
0
0
LV 0 0
L ZGm
(35)
= Z0 R1 y
0 u
Z R X Z0 R1 Z + G1
.
0
0
o
0
0
L VL
t
Ly
(36)
Thus o is a GLS solution ignoring selection, and to = (L0 VL)1 L0 y. It was proved in
Henderson (1975a) that
V ar(K0 o ) = K0 (X0 V1 X) K = K0 C11 K,
Cov(K0 o , t0 ) = 0, and
V ar(t) = (L0 VL)1 Hs (L0 VL)1 .
, is
Thus the variance of the predictor, K0 o + m0 u
K0 C11 K + M0 GZ0 L(L0 VL)1 Hs (L0 VL)1 L0 ZGM.
15
(37)
2
y24
y25
y26
The model is
yij = ti + aij + eij .
1 0 0 .5 .5
1 0 0 0
1 0 0
V ar(a) =
1 .25
0
.5
0
0
0
1
This implies that animal 1 is a parent of animals 4 and 5, and animal 2 is a parent of
animal 6. Let V ar(e) = 2I6 . Thus h2 = 1/3. We assume that animal 1 was chosen to have
2 progeny because y11 > y12 . Animal 2 was chosen to have 1 progeny and animal 3 none
because y12 > y13 . An L matrix describing this type of selection and resulting in L0 X = 0
is
!
1 1
0 0 0 0
.
0
1 1 0 0 0
Suppose we want to predict
31 (1 1 1 1 1 1) u.
This would be an estimate of the genetic trend in one generation. The mixed model
16
1.5
0
1.5
.5
0
2.1667
.5
0
0
1.8333
.5
0
0
0
0
0
0
.5
.5
.5
0
0
0 .6667 .6667
0
0
0
0
0
0
.6667 0
0
1.5
0
0
0
0
0
1.8333
0
0
0
0
1.8333
0
0
0
1.8333 0
0
6 3
6
0
1
E(ys ) =
18
12
6
6
6
6 12
2
1
2
1
1
1
d1
d2
17
1
1
1
0
0
0
0
0
0
1
1
1
t1
t2
and
1
E(us ) =
18
4
2
2
2
2 4
2
1
2
1
1
1
d1
d2
It is easy to verify that all of the predictors described have this same expectation. If
t = were known, a particularly simple unbiased predictor is
31 (1 1 1 1 1 1) (y X).
But the variance of this predictor is very much larger than the others. The variance is
1.7222 when Hs = L0 VL.
18
Chapter 14
Restricted Best Linear Prediction
C. R. Henderson
1984 - Guelph
Kempthorne and Nordskog (1959) derived restricted selection index. The model and
design assumed was that the record on the j th trait for the ith animal is
0
G0 m
0
(1)
This is a nice result but it depends upon knowing and having unrelated animals and
the same information on each candidate for selection. An extension of this to related
animals, to unequal information, and to more general designs including progeny and sib
tests is presented in the next section.
Restricted BLUP
We now return to the general mixed model
y = X + Zu + e,
0 1
1
ZR Z+G
Z0 R1 ZGC
ZR X
0 1
0 1
0 1
0
0
0
C GZ R X C GZ R Z
C GZ R ZGC
X0 R1 y
o
0 1
= ZR y
.
u
0 1
0
C GZ R y
(2)
Application
Quaas and Henderson (1977) presented computing algorithms for restricted BLUP
in an additively genetic model and with observations on a set of correlated animals. The
algorithms permit missing data on some or all observations of animals to be evaluated.
Two different algorithms are presented, namely records ordered traits within animals and
records ordered animals within traits. They found that in this model absorption of
results in a set of equations with rank less than r + q, the rank of regular mixed model
equations, where r = rank (X) and q = number of elements in u. The linear dependencies
is unique, but care needs to
relate to the coefficients of but not of u. Consequently u
o
0
0
be exercised in solving for and in writing K + m u, for K0 must now be estimable
under the augmented mixed model equations.
Chapter 15
Sampling from finite populations
C. R. Henderson
1984 - Guelph
Finite e
The populations from which samples have been drawn have been regarded as infinite
in preceding chapters. Thus if a random sample of n is drawn from such a population
with variance 2 , the variance-covariance matrix of the sample vector is In 2 . Suppose in
contrast, the population has only t elements and a random sample of n is drawn. Then
the variance-covariance matrix of the sample is
1/(t 1)
1
...
1/(t 1)
2.
(1)
If t = n, that is, the sample is the entire population, the variance-covariance matrix is
singular. As an example, suppose that the population of observations on a fixed animal
is a single observation on each day of the week. Then the model is
yi = + e i .
V ar(ei ) =
(2)
1
1/6 1/6
1/6
1
1/6
2
..
..
..
.
.
.
.
1/6 1/6
1
= y,
and
V ar(
) =
7n 2
,
6n
V ar (
) =
1
(3)
which goes to 2 /n when t goes to infinity, the latter being the usual result for a sample
of n from an infinite population with V ar = I 2 .
Suppose now that in this same problem we have a random sample of 3 unrelated
animals with 2 observations on each and wish to estimate and to predict a when the
model is
yij = + ai + eij ,
V ar(a) = I3 ,
V ar(e) =
1
1/6
0
0
0
0
1/6
1
0
0
0
0
0
0
1
1/6
0
0
0
0
1/6
1
0
0
0
0
0
0
1
1/6
0
0
0
0
1/6
1
Then
R1 =
6
1
0
0
0
0
1
6
0
0
0
0
0
0
0
0
1
6
0
0
6
1
0
0
0
0
1
6
0
0
0
0
0
0
6
1
a1
a2
a3
= .2
/35.
1.2 .4 .4 .4
.4 1.4 0
0
.4 0 1.4 0
.4 0
0 1.4
y..
y1 .
y2 .
y3 .
Finite u
We could also have a finite number of breeding values from which a sample is drawn.
If these are unrelated and are drawn at random from a population with t animals
1/t
1
..
V ar(a) =
1/t
a2 .
(4)
If q are chosen not at random, we can either regard the resulting elements of a as fixed
or we may choose to say we have a sample representing the entire population. Then
V ar(a) =
1/q
1
...
1/q
1
2
2
a
,
(5)
2
probably is smaller than a2 . Now G is singular, and we need to compute BLUP
where a
by the methods of Section 5.10. We would obtain exactly the same results if we assume
a fixed but with levels that are unpatterned, and we then proceed to biased estimation
as in Chapter 9, regarding the average values of squares and products of elements of a as
...
P =
1/q
1
1/q
2
a
.
(6)
Much controversy has surrounded the problem of an appropriate model for the interactions in a 2 way mixed model. One commonly assumed model is that the interactions
have V ar = I 2 . An alternative model is that the interactions in a row (rows being
random and columns fixed) sum to zero. Then variance of interactions, ordered columns
in rows, is
B 0 0
0 B 0
2
.. ..
..
. .
.
0 0
B
(7)
where B is c c with 1s on the diagonal and 1/(c 1) on all off- diagonals, where c =
number of columns. We will show in Chapter 17 how with appropriate adjustment of r2
(= variance of rows) we can make them equivalent models. See Section 1.5 for definition
of equivalence of models.
Suppose that we have a finite population of r rows and c columns. Then we might assume that the variance-covariance matrix of interactions is the following matrix multiplied
by 2 .
All diagonals = 1.
Covariance between interactions in the same row = 2 /(c 1).
Covariance between interactions in the same column = 2 /(r 1).
Covariance between interations in neither the same row nor column =
2 /(r 1)(c 1).
3
If the sample involves r rows and c columns both regarded as fixed, and there is no assumed
pattern of values of interactions, estimation biased by interactions can be accomplished by
regarding these as pseudo-random variables and using the above variances for elements
of P, the average value of squares and products of interactions. This methodology was
described in Chapter 9.
In previous chapters dealing with infinite populations from which u is drawn at random as well as infinite subpopulations from which subvectors ui are drawn the assumption
has been that the expectations of these vectors is null. In the case of a population with
finite levels we shall assume that the sum of all elements of their population = 0. This results in a variance- covariance matrix with rank t1, where t = the number of elements
in the population. This is because every row (and column) of the variance-covariance matrix sums to 0. If the members of a finite population are mutually unrelated (for example,
a set of unrelated sires), the variance-covariance matrix usually has d for diagonal elements and d/(t 1) for all off-diagonal elements. If the population refers to additive
genetic values of a finite set of related animals, the variance-covariance matrix would be
Aa2 , but with every row (and column) of A summing to 0 and a2 having some value
different from the infinite model value.
With respect to a factorial design with 2 factors with random and finite levels the
following relationship exists. Let ij represent the interaction variables. Then
q1
X
ij = 0 for all j = 1, . . . , q2 ,
i=1
and
q2
X
ij = 0 for all i = 1, . . . , q1 ,
(8)
j=1
where q1 and q2 are the numbers of levels of the first and second factors in the two
populations.
Similarly for 3 factor interactions, ijk ,
q3
X
k=1
q2
X
j=1
q1
X
i=1
and
(9)
This concept can be extended to any number of factors. The same principles regarding
interactions can be applied to nesting factors if we visualize nesting as being a factorial
design with planned disconnectedness. For example, let the first factor be sires and the
second dams with 2 sires and 5 dams in the experiment. In terms of a factorial design
the subclass numbers (numbers per litter, eg.) are
Sires
1
2
1
5
0
Dams
3
8
0
2
9
0
4
0
7
5
0
10
Covariance Matrices
Consider the model
y = X +
Zi ui + possible interactions + e.
(10)
The ui represent main effects. The ith factor has ti levels in the population. Under the traditional mixed model for variance components all ti infinity. In that case V ar(ui ) = Ii2
for all i, and all interactions have variance-covariance that are I times a scalar. Further,
all subvectors of ui and those subvectors for interactions are mutually uncorrelated.
Now with possible finite ti
1/(ti 1)
1
...
V ar(ui ) =
1/(ti 1)
i2 .
(11)
This notation denotes ones for diagonals and all off-diagonal elements = 1/(ti 1).
Now denote by gh the interactions between levels of ug and uh . Then there are tg th
interactions in the population and the variance-covariance matrix has the following form,
where i denotes the level of the g th factor and j the level of the hth factor. The diagonals
are V ar(gh ).
All elements ij with ij 0 = V ar(gh )/(th 1).
All elements ij with i0 j = V ar(gh )/(tg 1).
All elements ij with i0 j 0 = V ar(gh )/(tg 1)(th 1).
5
(12)
11
12
13
21
22
23
1
1/2 1/2 1
1/2
1
1/2 1/2
1
2
=
gh
1
1/2
1/2
1
1/2
1
Suppose that tg infinity. Then the four types of elements of the variance-covariance
matrix would be
[1, 1/(th 1), 0, 0] V ar(gh ).
This is a model sometimes used for interactions in the two way mixed model with levels
of columns fixed.
Now consider 3 factor interactions, f gh . Denote by i, j, k the levels of uf , ug , and
uh , respectively. The elements of the variance-covariance matrix except for the scalar,
V ar(f gh ) are as follows.
all diagonals
ijk with ijk 0
ijk with ij 0 k
ijk with i0 jk
ijk with ij 0 k 0
ijk with i0 jk 0
ijk with i0 j 0 k
ijk with i0 j 0 k 0
=
=
=
=
=
=
=
=
1.
1/(th 1).
1/(tg 1).
1/(tf 1).
1/(tg 1)(th 1)
1/(tf 1)(th 1)
1/(tf 1)(tg 1)
1/(tf 1)(tg 1)(th 1)
(13)
To illustrate, a mixed model with ug , uh fixed and tf infinity, the above become 1,
1/(th 1), 1/(tg 1), 0, k 1/(tg 1)(th 1), 0, 0, 0. If levels of all factors infinity,
the variance-covariance matrix is IV ar(f gh ).
Finally let us look at 4 factor interactions ef gh with levels of ue , uf , ug , uh denoted
by i, j, k, m, respectively. Except for the scalar V ar(ef gh ) the variance-covariance matrix
has elements like the following.
all diagonals = 1.
ijkm with ijkm0 = 1/(th 1), and
ijkm with ijk 0 m = 1/(tg 1), and
etc.
6
(14)
Note that for all interactions the numerator is 1, the denominator is the product of the
t 1 for subscripts differing, and the sign is plus if the number of differing subscripts is
even, and negative if the number of differing subscripts is odd. This set of rules applies
to any interactions among any number of factors.
4 1 1 1
4 1 1
4 1
G =
1
1
1
1
4
Then the mixed model coefficient matrix (not including s3 ,s4 ,s5 ) is
15 12
1
20
30
with inverse
3
2
11
36 21 6
1
26
1
.
9
26
6
3
1 =
2 2
s
9
2
2
s2
y1
y2
.5
1.5
0
.5
.5
.5
.4
2.6
.4
.4
.4
.4
.1
.1
1.4
.1
.1
.1
0
0
0
1.
0
0
0
0
0
0
1.
0
0
0
0
0
0
1.
.4
1.6
.4
.4
.4
.4
.1
.1
.4
.1
.1
.1
26
1 9 9 9
26
9
9
9
36 9 9
36 9
36
91
y1.
y2.
The upper 3 3 submatrix is the same as the inverse when only sires 1 and 2 are included.
The solution is
6
3
2 2
!
!
y1.
2
1 2
.
= 9
0
s
y2.
0
0
0
0
0
s3 , s4 , s5 = 0
as would be expected because these sires are unrelated to the 2 with progeny relative to
the population of 5 sires. The solution to
, s1 , s2 are the same as before. The prediction
P
error variance of + .2
si is
(1 .2 .2 .2 .2 .2) (Inverse matrix) (1 .2 .2 .2 .2 .2)0 = 4,
the value of the upper diagonal element of the inverse. By the same reasoning we find
P
that sj is BLUP of sj .2 5i=1 si and not of si .5 (s1 + s2 ) for i=1,2. Using the former
function with the inverse of the matrix of the second set of equations we obtain for s1
the value, 2.889. This is also the value of the corresponding diagonal. In contrast the
variance of the error of predition of s1 .5 (s1 + s2 ) is 1.389. Thus sj is the BLUP of
P
sj .2 5i=1 si .
The following rules insure that one does not attempt to predict K0 + M0 u that is
not predictable.
1. K0 must be estimable in a model in which E(y) = X.
2. Pretend that there are no missing classes or subclasses involving all levels of ui in the
population.
3. Then if K0 + M0 u is estimable in such a design with u regarded as fixed, K0 + M0 u
is predictable.
Use the rules of Chapter 2 in checking estimability.
For an example suppose we have sire treatment design with 3 treatments and 2
sires regarded as a random sample from an infinite population of possibly related sires.
Let the model be
yijk = + si + tj + ij + eijk .
, tj are fixed
V ar(s) = Is2 .
9
B 0 0
0 B 0
..
..
..
.
.
.
0 0 B
where
1
1/2 1/2
2
1
1/2
B =
1/2
.
1/2 1/2
1
Suppose we have progeny on all 6 sire treatment combinations except (2,3). This
creates no problem in prediction due to rule 1 above. Now we can predict for example
t1 +
ci (si + i1 ) t2
i=1
di (si + i2 )
i=1
where
X
i
ci =
di = 1.
That is, we can predict the difference between treatments 1 and 2 averaged over any sires
in the population, including some not in the sample of 2 sires if we choose to do so. In
fact, as we shall see, BLUE of (t1 t2 ) is BLUP of treatment 1 averaged equally over all
sires in the population minus treatment 2 averaged equally over all sires in the population.
Suppose we want to predict the merit of sire 1 versus sire 2. By the rules above,
(s1 s2 ) is not predictable, but
s1 +
3
X
cj (tj + ij ) s2
j=1
3
X
dj (tj + ij )
j=1
Calculation of BLUE and BLUP when there are finite levels of random factors must
take into account the fact that there may be singular G. Consider the simple one way
10
1. .2 .6
.
A =
1
.5
1.6
Suppose we have progeny numbers on these sires that are 9, 5, 3, 0. Suppose the model
is
yijk = + si + eij .
V ar(s) = As2 .
V ar(e) = Ie2 .
Then if we wish to include all 4 sires in the mixed model equations we must resort to the
methods of Sect. 5.10 since G is singular. One of those methods is to solve
2
As
1
0
s1
.
.
s4
17
9
5
3
0
9
9
0
0
0
5
0
5
0
0
3
0
0
3
0
0
0
0
0
0
2
e +
1 00
0 As2
y..
y1.
y2.
y3.
0
0
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
1
/e2 .
is BLUP of +
1 X
si .
4 i
sj is BLUP of sj
1 X
si .
4 i
(15)
17
9
5
3
9
9
0
0
5
0
5
0
3
0
0
3
e2
0
0
0
0
.2
.3
11
0
.2
1
.2
0
1
.3.
.2
2
s
s1
s2
s3
y..
y1.
y2.
y3.
e2 .
(16)
P
P
predicts + 41 4i=1 si , and sj predicts sj 14 4i=1 si . s4 can be computed by
1
1 .2 .3
[.5 .6 .5] .2 1 .2
.3 .2 1
s1
2
s
.
s3
1
2
3
2
8
2
0
1
0
0
1 1
0
0
0
0
0
0
0
0
1 1
1
/2.
If we do not include 11 and 32 in the solution the only submatrix of G that is singular
is the 2x2 block pertaining to 21 , 22 . The GLS equations regarding u as fixed are
1
10
0 0
11 0
6
0
9
6
15
8
2
0
0
10
8
0
0
0
8
8
0
9
0
9
0
0
9
0
2
0
0
2
0
0
2
0
0
6
6
0
0
0
0
6
12
s1
2
s
3
t
1
t2
12
21
22
31
y1..
y2..
y3..
y.1.
y.2.
y12.
y21.
y22.
y31.
1
10
(17)
1 1
1
1
/2
and add to the diagonal coefficients, (.5, .5, .5, 0, 0, 2, 1, 1, 2). The solution to the
resulting equations is BLUP. If we had included 12 and 32 , we would premultiply the
last 6 GLS equations (equations for ) by Var () and then add to the diagonals, (.5,
.5, .5, 0, 0, 1, 1, 1, 1, 1, 1). When all elements of a population are included in a BLUP
solution, an interesting property becomes apparent. The same summing to 0s occurs
in the BLUP solution as is true in the corresponding elements of the finite populations
described in Section 4.
i = 1, 2, 3.
2 1 1
2 1
V ar(a) = 1
,
1 1
2
V ar(e) = 10I.
ni = (5, 3, 2), yi. = (10, 8, 6).
Using singular G the nonsymmetric mixed model equations are
1.
.5
.1
.4
.5
.3
.2
2.
.3 .2
.5 1.6 .2
.5 .3 1.4
a
1
a
2
a
3
ui = 0.
13
2.4
.6
0
.6
(18)
We can obtain the same solution by pretending that V ar(a) = 3I. Then the mixed
model equations are
1
.5
.3
.2
.5 .5 + 31
0
0
1
.3
0
.3 + 3
0
.2
0
0
.2 + 31
a
1
a
2
a
3
2.4
1.0
.8
.6
(19)
The inverse of (19) does not yield prediction error variances. To obtain prediction error
variances of + a
. and of ai a
. pre-multiply it by
1
3
3
1
1
1
0
2 1 1
0 1
2 1
0 1 1
2
and post-multiply that product by the transpose of this matrix. This is a consequence of
the fact that the solution to (19) is BLUP of
1
3
3
1
1
1
0
2 1 1 s1
0 1
2 1 s2
0 1 1
2
s3
In most cases use of diagonal G does not result in the same solution as using the true G,
and the inverse never yields directly the prediction error variance-covariance matrix.
Rules for deriving diagonal submatrices of G to use in place of singular submatrices
follow. For main effects say of ui with ti levels substitute for the G submatrix described
2
in Section 6, Ii
, where
2
i
=
ti
ti1
i2
X
j
X
j,k,m
X
ti
ti
2
ij2 +
ijk
(ti1 )(tj1 )
j,k (ti1 )(tj1 )(tk1 )
ti
2
etc.
(ti1 )(tj1 )(tk1 )(tm1 ) ijkm
(20)
i2 refers to the scalar part of the variance of the ith factor, ij2 refers to 2 factor interactions
2
involving ui , ijk
refers to 3 factor interactions involving ui , etc. Note that the signs
alternate
X
ti tj
ti tj
2
2
ij
=
ij2
ijk
(ti1 )(tj1 )
(ti1 )(tj1 )(tk1 )
k
14
X
k,m
2
ijk
=
ti tj
2
etc.
(ti1 )(tj1 )(tk1 )(tm1 ) ijkm
(21)
X
ti tj tk
ti tj tk
2
ijk
(22)
Higher order interactions for 2 follow this same pattern with alternating signs. The
sign is positive when the number of factors in the denominator minus the number in the
numerator is even.
2
It appears superficially that one needs to estimate the different i2 , ij2 , ijk
, etc., and
this is difficult because non-diagonal, singular submatrices of G are involved. But if one
plans to use their diagonal representations, one might as well estimate the 2 directly by
any of the standard procedures for the conventional mixed model for variance components
estimation. Then if for pedagogical or other reasons one wishes estimates of 2 rather
than 2 , one can use equations (20), (21), (22) that relate the two to affect the required
linear transformation.
The solution using diagonal G should not be assumed to be the same as would have
been obtained from use of the true G matrix. If we consider predictable functions as
defined in Section 7 and take these same functions of the solution using diagonal G we
do obtain BLUP. Similarly using these functions we can derive prediction error variances
using a g-inverse of the coefficient matrix with diagonal G.
10
Biased Estimation
If we can legitimately assume that there is no expected pattern of values of the levels
of a fixed factor and no expected pattern of values of interactions between levels of fixed
factors, we can pretend that these fixed factors and interactions are populations with finite
levels and proceed to compute biased estimators as though we are computing BLUP of
random variables. Instead of prediction error variance as derived from the g-inverse of
the coefficient matrix we obtain estimated mean squared errors.
15
Chapter 16
The One-Way Classification
C. R. Henderson
1984 - Guelph
This and subsequent chapters will illustrate principles of Chapter 1-15 as applied
to specific designs and classification of data. This chapter is concerned with a model,
yij = + ai + eij .
(1)
Thus data can be classified with ni observations on the ith class and with the total of
observations in that class = yi. . Now (1) is not really a model until we specify what population or populations were sampled and what are the properties of these populations. One
possibility is that in conceptual repeated sampling and ai always have the same values,
and the eij are random samples from an infinite population of uncorrelated variables with
mean 0, and common variance, e2 . That is, the variance of the population of e is Ie2 ,
and the sample vector of n elements has expectation null and variance = Ie2 . Note that
Var(eij ) is assumed equal to Var(ei0 j ), i 6= i0 .
Estimation and tests of hypothesis are simple under this model. The mixed model
equations are OLS equations since Zu does not exist and since V ar(e) = Ie2 . They are
1
e2
n.
n1
n2
..
.
n1
n1
0
..
.
n2 . . .
o
o
0 . . . a1
o
n2 . . .
a2
..
..
.
.
y..
y1.
y2.
..
.
1
.
e2
(2)
The X matrix has t + 1 columns, where t = the number of levels of a, but the rank is t.
None of the elements of the model is estimable. We can estimate
+
t
X
ki ai ,
i=1
where
X
ki = 1,
or
t
X
k i ai ,
if
X
ki = 0.
t
X
ki ai ,
i=2
with
t
X
ki = 1,
i=2
0 0
1
0 n1
0 0
.. ..
. .
0
...
0
...
n1
.
.
.
2
..
.
3 0
e
4
o
ao1
ao2
ao3
78
49
16
13
1
.
e2
(3)
A solution is (0, 49/8, 16/3, 13/4). The corresponding g-inverse of the coefficient matrix
is
0 0
0
0
81 0
0
31 0 e
41
Suppose one wishes to estimate a1 a2 , a1 a3 , a2 a3 . Then from the above solution
these would be 49
16
, 49
13
, 16
13
. The variance-covariance matrix of these estimators
8
3
8
4
3
4
is
0 1 1
0
0 1
0 1
0 0
1 1
0
0
0
0
0
81
0
0
0
0
31
0
0
0
0
41
0
0
0
1
1
0
1
0
1
0 1 1
e2 .
(4)
e2 = (y0 y
= (468 427.708)/12
= 3.36.
Then we can substitute this for e2 to obtain estimated sampling variances.
Suppose we want to test the hypothesis that the levels of ai are equal. This can be
expressed as a test that
0 1 1
0
0 1
0 1
a1
a2
a3
V ar(K0 o ) = K0 (g inverse)K =
with
inverse =
2.4 .8
.8 2.9333
0
0
.45833 .125
.125 .375
!
e2
1
.
e2
K0 o = (.79167 2.875)0 .
Then
numerator SS = (.79167 2.875)
2.4 .8
2.9333
.79167
2.875
= 22.108.
The same numerator can be computed from
X yi.2
i
ni
y..2
= 427.708 405.6 = 22.108.
n.
22.108/2
3.36
(5)
where xi = (1,2,3,4,5). With Var(e) = I 2 the OLS equations under the full model are
19
64 270
270 1240
5886
1240
5886
28, 384
138, 150
5886
28, 384
138, 150
676, 600
3, 328, 686
1
2
3
4
61
230
1018
4784
23, 038
(6)
Biased Estimation of + ai
Now we consider biased estimation under the assumption that values of a are unpatterned. Using the same data as in the previous section we assume for purposes of
illustration that V ar(e) = 65 I, and that the average values of squares and products of the
deviations of a from a are
4 1 1 1
4 1 1
1
4 1
8
1
1
1
1
4
(7)
Then the equations for minimum mean squared error estimation are
22.8
.9
1.35
2.1
.6
3.15
6.0
4.0
.75
.75
.75
.75
2.4
.3
2.2
.3
.3
.3
1.2
.15
.15
1.6
.15
.15
3.6
.45
.45
.45
2.8
.45
9.6
1.2
1.2
1.2
1.2
5.8
o
( )
The solution is (3.072, -.847, .257, -.031, -.281, .902). Note that
X
a
i = 0.
3
-.816
.288
4
-.566
.538
.250
5
-1.749
-.645
-.933
-1.183
73.2
1.65
3.9
6.9
3.15
15.6
(8)
3
-1.0
.5
4
5
-.667 -2.125
.833 -.625
.333 -1.125
-1.458
1
0
0
0
0
0
0
4
1
1
1
1
0
1
4
1
1
1
0
1
1
4
1
1
0
1
1
1
4
1
0
1
1
1
1
4
These are
2
3
4
5
1 .388 .513 .326 .222
2
.613 .444 .352
3
.562 .480
4
.287
The corresponding values for BLUE are
2
3
4
1 .7 1.2 .533
2
1.5 .833
3
1.333
4
5
.325
.625
1.125
.458
If the priors used are really correct, the MSE for biased estimators of differences are
considerably smaller than BLUE.
The same biased estimators can be obtained by use of a diagonal P, namely .625I,
where
5
.625 =
(.5).
51
This gives the same solution vector, but the inverse elements are different. However, mean
squared errors of estimable functions such as the a
i a
j and
+ a yield the same results
when applied to the inverse.
6
(9)
where xi = (1,2,3,4,5). Suppose that the levels of a are assumed to have no pattern and
we use a prior value on their squares and products =
5
6
.05
.2
...
.05
22.8
76.8
6.
2.4
1.2
3.6
76.8
324
6
4.8
3.6 14.4
.36 2.34 2.2 .12 .06 .18
.54 2.64 .3 1.48 .06 .18
.84 2.94 .3 .12 1.24 .18
.24 .24 .3 .12 .06 1.72
1.26
8.16 .3 .12 .06 .18
9.6
48
.48
.48
.48
.48
2.92
a
1
a
2
a
3
a
4
a
5
73.2
276
.66
1.56
2.76
1.26
6.24
(10)
The solution is [1.841, .400, -.145, .322, -.010, -.367, .200]. Note that
X
a
i = 0.
We need to observe precautions in interpreting the solution. is not estimable and neither
is + ai nor ai ai0 .
We can only estimate treatment means associated with the particular level of xi in the
experiment. Thus we can estimate + ai + xi where xi = 1,2,3,4,5 for the 5 treatments
respectively. The biased estimates of treatment means are
1. 1.841 + .400 - .145 = 2.096
2. 1.841 + .800 + .322 = 2.963
3. 1.841 + 1.200 - .010 = 3.031
4. 1.841 + 1.600 - .367 = 3.074
5. 1.841 + 2.000 + .200 = 4.041
The corresponding BLUE are the treatment means, (2.0, 3.5, 3.0, 2.667, 4.125).
If the true ratio of squares and products of ai to e2 are as assumed above, the biased
for the biased
estimators have minimum mean squared error. Note that E(
+a
i + xi )
estimator is + xi + some function of a (not equal to ai ). The BLUE estimator has, of
course, expectation, + xi + ai , that is, it is unbiased.
7
If, in contrast to xi being constant for every observation on the ith treatment as in
Section 4, we have the more traditional covariate model,
yij = + xij + ai + eij ,
(11)
we can then estimate + ai unbiasedly as well as ai ai0 . Again, however, if we think the
ai are unpatterned and we have some good prior value of their products, we can obtain
smaller mean squared errors by using the biased method.
Now we need to consider the meaning of an estimator of + ai . This really is an
estimator of treatment mean in hypothetical repeated sampling in which xi. = 0. What if
the range of the xij is 5 to 21 in the sample? Can we infer from this that the the response
to levels of x is that same linear function for a range of xij as low as 0? Strictly speaking
we can draw inferences only for the values of x in the experiment. With this in mind
we should really estimate + ai + k, where k is some value in the range of xs in the
experiment. With regard to treatment differences, ai ai0 , can be regarded as an estimate
of ( + ai + k) ( + ai0 + k), where k is in the range of the xs of the experiment.
Nonhomogenous Regressions
A still different covariate model is
yij = + i xij + ai + eij .
Note that in this model is different from treatment to treatment. According to the rules
for estimability + ai , ai ai0 , and i are all estimable. However, it is now obvious that
ai ai0 has no practical meaning as an estimate of treatment difference. We must specify
what levels of x we assume to be present for each treatment. In terms of a treatment
mean these are
+ ai + ki i
and
+ aj + kj j
and the difference is
ai + ki i aj kj j .
Suppose ki = kj = k. Then the treatment difference is
ai aj + k(i j ),
and this is not invariant to the choice of k when i 6= j . In contrast when all i = , the
treatment difference is invariant to the choice of k.
Let us illustrate with two treatments.
8
Treatment
1
2
ni
8
5
yi. xi.
38 36
43 25
0
25
0
135
x2ij
220
135
xij yij
219
208
36
0
220
+ t1
+ t2
1
2
38
43
219
208
The solution is (1.0259, 12.1, .8276, -.7). Then the estimated difference, treatment 1
minus treatment 2 for various values for x, the same for each treatment, are as follows
x Estimated Difference
0
-11.07
2
-8.02
4
-4.96
6
-1.91
8
1.15
10
4.20
12
7.26
It is obvious from this example that treatment differences are very sensitive to the average
value of x.
=
=
=
=
+ ai + eij .
Ia2 ,
Ie2 ,
0.
In this case it is assumed that the levels of a in the sample are a random sample from an
infinite population with var Ia2 , and similarly for the sample of e. The experiment may
have been conducted to do one of several things, estimate , predict a, or to estimate a2
and e2 . We illustrate these with the following data.
Levels of a ni
1
5
2
2
3
1
4
3
5
8
yi.
10
7
3
8
33
Let us estimate and predict a under the assumption that e2 /a2 = 10. Then we
need to solve these equations.
19
5
15
2
0
12
1
0
0
11
3
0
0
0
13
8
0
0
0
0
18
a
1
a
2
a
3
a
4
a
5
61
10
7
3
8
33
(12)
The solution is [3.137, -.379, .061, -.012, -.108, .439]. Note that a
i = 0. This could have
been anticipated by noting that the sum of the last 4 equations minus the first equation
gives
X
10
a
i = 0.
The inverse of the coefficient matrix is
.0790 .0263 .0132 .0072 .0182 .0351
.0754
.0044
.0024
.0061
.0117
.0855
.0012
.0030
.0059
.0916
.0017
.0032
.0811
.0081
.0712
(13)
V ar(r) =
5
5
0
0
0
0
2
0
2
0
0
0
1
0
0
1
0
0
3
0
0
0
3
0
8
0
0
0
0
8
19 5 2 1
5 0 0
2 0
3
0
0
0
3
8
0
0
0
0
8
5
5
0
0
0
0
2
0
2
0
0
0
1
0
0
1
0
0
3
0
0
0
3
0
8
0
0
0
0
8
a2 +
e2 .
This gives
) = .27163 a2 + .06802 e2 ,
E(
a0 a
a2 = .595.
and using
e2 = 2.8, we obtain
Finite Levels of a
Suppose now that the five ai in the sample of our example of Section 7 comprise all
of the elements of the population and that they are unrelated. Then
..
V ar(a) =
.25
1
.25
.
1
a2 .
Let us assume that e2 /a2 = 12.5. Then the mixed model equations are the OLS equations
premultiplied by
1
0
0
0
0
0
.08
.02
.02
.02
.
(14)
.08 .02
.08
11
This gives the same solution as that to (11). This is because a2 of the infinite model is
times a2 of the finite model. See Section 15.9. Now
is a predictor of
+
1X
ai
5 i
aj
1X
ai .
5 i
5
4
and a
j is a predictor of
Let us find the Method 1 estimate of a2 in the finite model. Again we compute i yi.2 /ni
and y..2 /n. . Then the coefficient of e2 in each of these is the same as in the infinite model,
that is 5 and 1 respectively. For the coefficients of a2 we need the contribution of a2 to
V ar(rhs). This is
P
5
5
0
0
0
0
2
0
2
0
0
0
38.5
1
0
0
1
0
0
3
0
0
0
3
0
8
0
0
0
0
8
14
1
..
14
(left matrix)0
4.0
.5 1.5 4.
.
1. .75 2.
9. 6.
64.
(15)
Then the coefficient of a2 in i yi.2 /ni is tr[dg(5, 2, 1, 3, 8)]1 times the lower 55 submatrix
of (15) = 19.0. The coefficient of a2 in y..2 /n. = 38.5/19 = 2.0263. Thus we need only
the diagonals of (15). Assuming again that e2 = 2.8, we find
a2 = .231. Note that in
the infinite model
a2 = .288 and that 45 (.231) = .288 except for rounding error. This
demonstrates that we could estimate a2 as though we had an infinite model and estimate
and predict a using
a2 /
e2 in mixed model equations for the infinite model. Remember
that the resulting inverse does not yield directly V ar(
) and V ar(
a a). For this preand post-multiply the inverse by
P
1
5
5
0
0
0
0
0
1
4
1
1
1
1
1
1
4
1
1
1
1
1
1
4
1
1
1
1
1
1
4
1
1
1
1
1
1
4
=
=
=
=
=
+ si + eij .
As2 ,
Ie2 ,
0,
10.
n. n1. n2. . . .
0 0 0
...
2
1
2
n1. n1. 0 . . .
0
A
e /s
+
0
n2. 0
..
..
..
.
.
.
s1
s2
..
.
(16)
A=
0 .5 .5 0
1. 0
0 .5
1. .25 0
.
1 0
19
5
65/3
2
1
3
8
0
20/3 20/3
0
s1
46/3
0
0
20/3 s2
s =
43/3
0
0
3
s4
49/3
0
64/3
s5
61
10
7
3
8
33
(17)
The solution is (3.163, -.410, .232, -.202, -.259, .433). Note that i si 6= 0 in contrast to
the case in which A = I. Unbiased estimators of e2 and s2 can be obtained by computing
Method 1 type quadratics, that is
P
y0 y
X
i
13
yi.2 /ni
and
X
However, the expectations must take into account the fact that V ar(s) 6= Is2 , but rather
As2 . In a non-inbred population
E(y0 y) = n. (s2 + e2 ).
For an inbred population the expectation is
X
ni aii a2 + n. e2 ,
where aii is the ith diagonal element of A. The coefficients of e2 in yi.2 /ni and y..2 /n. are
the same as in an unrelated sample of sires. The coefficients of s2 require the diagonals
of V ar(rhs). For our example, these coefficients are
P
5
5
0
0
0
0
2
0
2
0
0
0
1
0
0
1
0
0
3
0
0
0
3
0
8
0
0
0
0
8
A (left matrix)0
25.
0 2.5
7.5
0
4.
0
0 8.
.
=
1.
.75
0
9.
0
64.
(18)
2
y1.
y2
..
ni
n.
Cs is the last 5 rows of the inverse of the mixed model coefficient matrix.
V ar(rhs) = Matrix (18) s2 + (OLS coefficient matrix) e2 .
Then
.0788 .0527
.0443
.0526 .0836
.0425
.0303
.0420
.0561
2
.0285
.0283 .0487
V ar(s) =
s +
.0544 .0671
.1014
.01284 .00774
.00603
.00535 .01006
.00982
.00516
.00677
.00599
.00731
.00159 .00675
e .
.01133 .00883
.01462
s0 A1s = .36018, with expectation .05568 e2 + .22977 s2 .
e2 for approximate MIVQUE
can be computed from
X
y0 y
yi.2 /ni. .
i
15
Chapter 17
The Two Way Classification
C. R. Henderson
1984 - Guelph
(1)
We shall be concerned first with a model in which a and b are both fixed, and as a
consequence so is . For convenience let
ij = + ai + bj + ij .
(2)
Then it is easy to prove that the only estimable linear functions are linear functions of
ij that are associated with filled subclasses (nij > 0). Further notations and definitions
are:
Row mean =
i. .
(3)
Its estimate is sometimes called a least squares mean, but I agree with Searle et al. (1980)
that this is not a desirable name.
Column mean
Row effect
Column effect
General mean
Interaction effect
=
=
=
=
=
.j .
i.
.. .
.j
.. .
.. .
ij
i. .
.j +
.. .
(4)
(5)
(6)
(7)
(8)
From the fact that only ij for filled subclasses are estimable, missing subclasses result in
the parameters of (17.3) . . . (17.8) being non-estimable.
All row effects, columns effects, and interaction effects are non-estimable if one or more
nij = 0. Due to these non-estimability considerations, mimicking of either the balanced
or the filled subclass estimation and tests of hypotheses wanted by many experimenters
present obvious difficulties. We shall present biased methods that are frequently used and
a newer method with smaller mean squared error of estimation given certain assumptions.
(10)
1 X
1 X X
y
y .
(11)
ij.
j
i
j ij.
c
rc
Thus BLUE of any of (17.3), . . . , (17.8) is that same function of
ij , where
ij = yij. .
The variances of any of these functions are simple to compute. Any of them can be
P P
expressed as i j kij ij with BLUE =
X X
i
kij yij. .
(12)
X X
i
kij2 /nij .
(13)
X X
i
tij yij.0
is
e2
X X
i
(14)
The numbers required for tests of hypotheses are (17.13) and (17.14) and the associated
BLUEs. Consider a standard ANOVA, that is, mean squares for rows, columns, R C.
The R C sum of squares with (r 1)(c 1) d.f. can be computed by
X X
i
2
yij.
Reduction under model with no interaction.
nij
(15)
Di
Dj
Nij
0
yi
0
yj
=
=
=
=
=
ao
bo
yi
yj
(16)
(17)
Sums of squares for rows and columns can be computed conveniently by the method of
weighted squares of means, due to Yates (1934). For rows compute
i =
1 X
y (i = 1, . . . , r), and
j ij.
c
ki1 =
(18)
1 X 1
.
j n
c2
ij
ki i2 (
k )2 /
i i i
k.
i i
(19)
The column S.S. with c 1 d.f. is computed in a similar manner. The error mean
square for tests of these mean squares is
(y0 y
X X
i
y 2 /nij )/(n..
j ij.
rc).
(20)
An obvious limitation of the weighted squares of means for testing rows is that the test
refers to equal weighting of subclasses across columns. This may not be what is desired
by the experimenter.
An illustration of a filled subclass 2 way fixed model is a breed by treatment design
with the following nij and yij. .
Breeds
1
2
3
4
1
5
4
5
4
nij
2
2
2
1
5
Treatments
yij.
3 1 2
1 68 29
2 55 30
4 61 13
4 47 65
3
3
19
36
61
75
XX
i
2
yij.
/nij = 8207.5.
Let us test the hypothesis that interaction is negligible. The reduction under a model
with no interaction can be obtained from a solution to equation (17.21).
8 0
0
0
10
0
0
0
13
5
4
5
4
18
2
2
1
5
0
10
b1
b2
b3
b4
t1
t2
116
121
135
187
231
137
(21)
The solution is (18.5742, 18.5893, 16.3495, 17.4624, -4.8792, -4.0988)0 . The reduction is
8187.933. Then R C S.S. = 8207.5 - 8187.923 = 19.567. S. S. for rows can be formulated
as a test of the hypothesis
1 1 1 0 0 0 0 0 0 1 1 1 11
. = 0.
0
K = 0 0 0 1 1 1 0 0 0 1 1 1
..
0 0 0 0 0 0 1 1 1 1 1 1
43
The
ij are (13.6, 14.5, 19.0, 13.75, 15.0, 18.0, 12.2, 13.0, 15.25, 11.75, 13.0, 18.75).
= (3.6 3.25 3.05)0 .
K0
= K0 [diag (5 2 . . . 4)]1 Ke2
V ar(K0 )
2.4 .7
.7
2
1.95 .7
=
e .
2.15
= 20.54 = SS for rows.
1 K0
0 [V ar(K0 )]
e2 (K0 )
SS for cols. is a test of
K0 =
1 0 1 1 0 1 1 0 1 1 0 1
0 1 1 0 1 1 0 1 1 0 1 1
0
=
K
19.7
15.5
= 0.
2.9 2.0
4.2
=
V ar(K0 )
.
!
e2 .
0 [V ar(K0 )]
1 K0
= 135.12 = SS for Cols.
e2 (K0 )
Next we illustrate weighted squares of means to obtain these same results. Sums of
squares for rows uses the values below
i
ki
15.7
5.29412
15.5833
7.2
13.4833 6.20690
14.5
12.85714
1
2
3
4
X
i
ki i2 = 6885.014.
ki i )2 /
ki = 6864.478.
Diff. = 20.54 as before.
i
X
X
kj )2 /
kj b2j = 6844.712.
kj = 6709.590.
Diff. = 135.12 as before.
Another interesting method for obtaining estimates and tests involves setting up least
squares equations using Lagrange multipliers to impose the following restrictions
X
Xi
i
ij = 0 for i = 1, . . . , r.
ij = 0 for j = 1, . . . , c.
o = 0
A solution is
b0 = (0, 14, 266, 144)/120.
t0 = (1645, 1771, 2236)/120.
0 = (13, 31, 44, 19, 43, 62, 85, 55, 140, 91, 67, 158)/120.
Using these values,
ij are the yij. , and the reduction in SS is
X X
i
2
yij.
/nij. = 8207.5.
Next the SS for rows is this reduction minus the reduction when bo is dropped from
the equations restricted as before. A solution in that case is
t0 = (12.8133, 14.1223, 17.3099).
0 = (.4509, .4619, .0110, .4358, .1241, .3117,
.0897, 1.4953, 1.4055, .7970, .9093, 1.7063),
and the reduction is 8186.960. The row sums of squares is
8207.5 8186.960 = 20.54 as before.
Now drop to from the equations. A solution is
0 = (13.9002, 15.0562, 13.5475, 14.4887).
b
0 = (.9648, .9390, 1.9039, .2751, .2830, .5581,
0
X
X
0
0y
,
=X
0 is the submatrix of X
referring to b, t only, and y
is a vector of subclass means.
where X
These equations are
3 0 0 0
3 0 0
3 0
1
1
1
1
4
1
1
1
1
0
4
1
1
1
1
0
0
4
b1
b2
b3
b4
t1
t2
t3
47.1
46.75
40.45
43.5
51.3
55.5
71.0
(22)
A solution is
b0 = (0, 14, 266, 144)/120,
t0 = (1645, 1771, 2236)/120.
This is the same as in the restricted least squares solution. Then
ij = yij. boi toj ,
which gives the same result as before. More will be said about these alternative methods
in the missing subclass case.
6
When one or more subclasses is missing, the estimates and tests described in Section 2
cannot be effected. What should be done in this case? There appears to be no agreement
among statisticians. It is of course true that any linear functions of ij in which nij > 0
can be estimated by BLUE and can be tested, but these may not be of any particular
interest to the researcher. One method sometimes used, and this is the basis of a SAS
Type 4 analysis, is to select a subset of subclasses, all filled, and then to do a weighted
squares of means analysis on this subset. For example, suppose that in a 3 4 design,
subclass (1,2) is missing. Then one could discard all data from the second column, leaving
a 33 design with filled subclasses. This would mean that rows are compared by averaging
over columns 1,3,4 and only columns 1,3,4 are compared, these averaged over the 3 rows.
One could also discard the first row leaving a 2 4 design. The columns are compared
by averaging over only rows 2 and 3, and only rows 2 and 3 are compared, averaging over
all 4 columns. Consequently this method is not unique because usually more than one
filled subset can be chosen. Further, most experimenters are not happy with the notion
of discarding data that may have been costly to obtain.
Another possibility is to estimate ij for missing subclasses by some biased procedure.
For example, one can estimate ij such that E(
ij ) = + ai + bj + some function of the
ij associated with filled subclasses. One way of doing this is to set up least squares
equations with the following restrictions.
X
j ij
i ij
ij
= 0 for i = 1, . . . , r.
= 0 for j = 1, . . . , c.
= 0 if nij = 0.
This is the method used in Harveys computer package. When equations with these
restrictions are solved,
equations with dropped. Then the biased estimators in this case for filled as well as
empty subclasses, are
ij = o + aoi + boj .
(23)
A third possibility is to assume some prior values of e2 and squares and products
of ij and compute as in Section 9.1. Then all
ij are biased by ij but have in their
expectations + ai + bj . Finally one could relax the requirement of + ai + bj in the
expectation of
ij . In that case one would assume average values of squares and products
of the ai and bj as well as for the j and use the method described in Section 9.1.
Of these biased methods, I would usually prefer the one in which priors on the ,
but not on a and b are used. In most fixed, 2 way models the number of levels of a and
b are too small to obtain a good estimate of the pseudo-variances of a and b.
We illustrate these methods with a 4 3 design with 2 missing subclasses as follows.
nij
2 3
2 3
2 0
0 1
1
1 5
2 4
3 3
4
2
5
4
1
30
21
12
yij.
2 3 4
11 13 7
6 9
3 15
First we illustrate estimation under sum to 0 model for and in addition the assumption that 23 = 32 = 0. The simplest procedure for this set of restrictions is to solve
for ao , bo in equations (17.24).
4 0 0 1
3 0 1
3 1
1
1
0
0
2
1
0
1
0
0
2
1
1
1
0
0
0
3
ao
bo
19.333
10.05
10.75
15.25
8.5
7.333
9.05
(24)
+ 11
+ 13
+ 27 = 19.333, etc. for others. A solution is
The first right hand side is 30
5
2
3
(3.964, 2.286, 2.800, 2.067, 1.125, .285, 0). The estimates of ij are yij. for filled subclasses
and 2.286 + .285 for
23 and 2.800 + 1.125 for
32 . If ij are wanted they are
11 = y11 3.964 1.125
etc, for filled subclasses, and 0 for 23 and 32 .
8
The same results can be obtained, but with much heavier computing by solving least
P
P
squares equations with restrictions on that are j ij = 0 for all i, i ij = 0 for all
j, and ij = 0 for subclasses with nij = 0. From these equations one can obtain sums of
squares that mimic weighted squares of means. A solution to the restricted equations is
o
ao
bo
o
=
=
=
=
0,
(3.964, 2.286, 2.800, 2.067)0 .
(1.125, .285, 0)0 .
(.031, .411, .084, .464, .897, .411,
0, .486, .866, 0, .084, .950)0 .
Note that the solution to conforms to the restrictions imposed. Also note that this
solution is the same as the one previously obtained. Further,
ij = o + aoi + boj + ijo = yij.
for filled subclasses.
A test of hypothesis that the main effects are equal, that is
i. =
i0 for all pairs of i,
i , can be effected by taking a new solution to the restricted equations with ao dropped.
Then the SS for rows is
( o )0 RHS ( o )0 RHS ,
(25)
0
where o is a solution to the full set of equations, and this reduction is simply i j yij2 /nij. ,
o is a solution with a deleted from the set of equations, and RHS is the right hand side.
This tests a nontestable hypothesis inasmuch as the main effects are not estimable when
subclasses are missing. The test is valid only if ij are truly 0 for all missing subclasses,
and this is not a testable assumption, Henderson and McAllister (1978). If one is to use
a test based on non-estimable functions, as is done in this case, there should be some
attempt to evaluate the numerator with respect to quadratics in fixed effects other than
those being tested and use this in the denominator. That is, a minimum requirement
could seem to be a test of this sort.
P P
subclass mean square, unless 23 and 32 are truly equal to zero, and we cannot test this
assumption.
Similarly a solution when b is dropped is
o = 0,
ao = (5.089, 3.297, 3.741)0 ,
o = (.098, .254, .355, .003, 1.035, .254,
0, .781, 1.133, 0, .355, .778)0 .
The reduction is 554.81. Then if 23 and 32 = 0, the numerator sum of squares with 3 df
is 579.03-554.81. The sum of squares for interaction with (3-1)(4-1)-2 = 4 df. is 579.03
- reduction with and the Lagrange multiplier deleted. This latter reduction is 567.81
coming from a solution
o = 0,
ao = (3.930, 2.296, 2.915)0 , and
o = (2.118, 1.137, .323, 0)0 .
Another biased estimation method sometimes suggested is to ignore . That is, least
squares equations with only o , ao , bo are solved. This is sometimes called the method
of fitting constants, Yates (1934). This method has quite different properties than the
method of Section 17.4. Both obtain estimators of ij with expectations +ai +bj + linear
functions of ij . The method of section 17.4 minimizes the contribution of quadratics in
to MSE, but does a poor job of controlling on the contribution of e2 . In contrast,
the method of fitting constants minimizes the contribution of e2 but does not control
quadratics in . The method of the next section is a compromise between these two
extremes.
A solution for our example for the method of this section is
o = 0,
ao = (3.930, 2.296, 2.915)0 ,
bo = (2.118, 1.137, .323, 0)0 .
Then if we wish
ij these are o + aoi + boj .
A test of row effects often suggested is to compute the reduction in SS under the
model with dropped minus the reduction when a and are dropped, the latter being
P
2
/n.j. . Then this is tested against some denominator. If
e2 is used, the
simply j y.j.
10
The methods of the two preceding sections control in the one case on and the other
on e2 as contributors to MSE. The method of this section is an attempt to control on
both. The logic of the method depends upon the assumption that there is no pattern of
values of , such, for example as linear by columns or linear by rows. Then consider the
matrix of squares and products of elements of ij for all possible permutations of rows
and columns. The average values are found to be
ij2
ij ij 0
ij i0 j
ij i0 j 0
=
=
=
=
.
/(c 1).
/(r 1).
/(r 1)(c 1).
(26)
6 2 2 2 3
1
1
1 3
1
1
1
6 2 2
1 3
1
1
1 3
1
1
6 2
1
1 3
1
1
1 3
1
6
1
1
1 3
1
1
1 3
6 2 2 2 3
1
1
1
6 2 2
1 3
1
1
6 2
1
1 3
1
6
1
1
1 3
6
2
2
2
6 2 2
6
2
(27)
Then add 1 to each of the last 12 diagonals. The resulting coefficient matrix is (17.28) . . .
(17.31). The right hand side vector is (3.05, 1.8, 1.5, 3.15, .85, .8, 2.6, .4, 1.8, -4.8, .95,
11
1.15, -2.25, .15, -3.55, -1.55, .45, 4.65)0 = (a1 a2 a3 b1 b2 b3 ). Thus and b4 are deleted,
which is equivalent to obtaining a solution with o = 0, bo4 = 0.
Upper left 9 9
.6
0
0
.25
.1
.15 .25
.1 .15
0
.55
0
.2
.1
0
0
0
0
0
0
.4
.15
0
.05
0
0
0
.25
.2 .15
.6
0
0 .25
0
0
.1
.1
0
0
.2
0
0
.1
0
.15
0 .05
0
0
.2
0
0 .15
.8 .25 .2
.45 .1 .25 2.5 .2 .3
.4
.15
.4 .15
.3 .25 .5 1.6 .3
0
.55
.2 .15 .1
.75 .5 .2 1.9
(28)
Upper right 9 9
.1
0
0 0
0
.2
.1 0
0
0
0 0
0
.2
0 0
0
0
.1 0
0
0
0 0
.2 .6
.1 0
.2
.2 .3 0
.2
.2
.1 0
0
0 0
0
.25
0 0
0
0
.15 0
.05
0
.15 0
0
0
0 0
0
0
0 0
.05
.25 .45 0
.05
.25
.15 0
.05
.25
.15 0 .15
0
0
.2
0
0
0
.2
.2
.2
(29)
Lower left 9 9
.4
.4
.2
0
.2
.4
.2
0
.2
.45
.5
.3
1.1
.9
.25
.15
.55
.45
.4 .15 .1 .25 .5 .2 .3
.2
0 .1
.2 .75
.1
.15
.4
0
.3
.2
.25 .3
.15
.2
0 .1 .6
.25
.1 .45
.4
0 .1
.2
.25
.1
.15
.4 .45
.2
.05 .75
.1
.15
.8
.15 .6
.05
.25 .3
.15
.4
.15
.2 .15
.25
.1 .45
.8
.15
.2
.05
.25
.1
.15
12
(30)
Lower right 9 9
1.6
.1
.1
.1
.3
.1
.1
.1
.3
.2
2.2
.4
.4
.4
.6
.2
.2
.2
.1
0 .75
.15
0
.05 .6
.2
0 .5 .45
0
.05
.2
1.6
0 .5
.15
0
.05
.2
.2 1.0 .5
.15
0 .15
.2
.2
0
2.5
.15
0
.05 .6
.1
0
.25
1.9
0 .1 .4
.3
0
.25 .3 1.0 .1 .4
.1
0
.25 .3
0
1.3 .4
.1
0 .75 .3
0 .1 2.2
(31)
The solution is
ao = (3.967, 2.312, 2.846)0 .
bo = (2.068, 1.111, .288, 0)0 .
o displayed as a table is
1
2
3
4
1 -.026 .230 .050 -.255
2 .614 -.230
0 -.384
3 -.588
0 -.050 .638
ij = aoi + boj + ijo . The same
Note that the ijo sum to 0 by rows and columns. Now the
solution can be obtained more easily by treating as a random variable with V ar = 12I.
rc
The value 12 comes from (r1)(c1)
6 = (3)4
(6) = 12. The resulting coefficient matrix
(2)3
(times 60) is in (17.32). The right hand side vector is (3.05, 1.8, 1.5, 3.15, .85, .8, 1.5, .55,
.65, .35, 1.05, .3, 0, .45, .6, 0, .15, .75)0 . and b4 are dropped as before.
36
0
33
0 15
0 12
24 9
36
6
6
0
0
12
9 15 6 9 6 0 0 0 0 0 0 0 0
0 0 0 0 0 12 6 0 15 0 0 0 0
3 0 0 0 0 0 0 0 0 9 0 3 12
0 15 0 0 0 12 0 0 0 9 0 0 0
0 0 6 0 0 0 6 0 0 0 0 0 0
12 0 0 9 0 0 0 0 0 0 0 3 0
(32)
diag (20,11,14,11,17,11,5,20,14,5,8,17)
The solution is the same as before. This is clearly an easier procedure than using the
equations of (17.28). The inverse of the matrix of (17.28) post-multiplied by
I 0
0 P
13
where P = the matrix of (17.27), is not the same as the inverse of the matrix of (17.32)
with diagonal G, but if we pre-multiply each of them by K0 and then post-multiply by
K, where K0 is the representation of ij in terms of a, b, , we obtain the same matrix,
which is the mean squared error for the
ij under the priors used, e2 = 20 and = 6.
Biased estimates of
ij are in both methods
3.58
.32
8.34
.18
.15
6.01
.46
.28 .32 .87 .09
.64 .43 1.66 2.29 .32
.38
.02 .15
4.16
.04
4.40
.43
1.93
.31
8.34
2.29
.32
33.72 1.54
3.63
Upper right 8 4
.33
.04
.33
.37
.34
.04
1.12
.25
Lower right 4 4
5.66
2.62
33.60
1.00 .50
4.00 2.03
14.08 .73
4.44
1
4
1
4
1
4
1
4
1
4
1
4
1
4
1
4
or the g-inverse of the matrix using diagonal G. This 2 2 matrix gives the MSE for e2 =
20, = 6. Finally premultiply the inverse of this matrix by (K0 o )0 and post-multiply by
K0 o . This quantity is distributed approximately as 22 under the null hypothesis.
1
...
1
r1
1
r1
a2 and
1
...
1
c1
1
c1
2
,
b
2
0
0
P
b b
2
0
0
P
and add 1s to all diagonals except the first. This yields the equations with coefficient
matrix in (17.33) . . . (17.36) and right hand vector = (6.35, 5.60, -1.90, -3.70, 18.75, -8.85,
-9.45, -.45, 2.60, .40, 1.80, -4.80, .95, 1.15, -2.25, .15, -3.55, -1.55, .45, 4.65)0 .
Upper left 10 10
1.55
.6
.55
.4
.6
.2
.2
.55
.25
.1
.5
3.4 1.1 .8
.3
.2
.5
.5
1.0
.4
.2 1.2
3.2 .8
0
.2 .4
.4 .5 .2
.7 1.2 1.1
2.6 .3 .4 .1
.1 .5 .2
2.55
1.2
.75
.6
6.4 .6 .6 1.65 2.25 .3
2.25
0 1.65 .6 1.8 .6
2.8 1.65 .75 .3
1.95 .6
1.35
1.2 1.8 .6 .6
5.95 .75 .3
.35
.8 .25 .2
.45 .1 .25
.25
2.5 .2
.15 .4
.15
.4 .15
.3 .25
.25 .5 1.6
15
(33)
Upper right 10 10
.15
.6
.3
.3
.45
.45
1.35
.45
.3
.3
.1
.4
.2
.2
.3
.3
.3
.9
.2
.2
.2
.4
.8
.4
1.8
.6
.6
.6
.6
.2
.1
.2
.4
.2
.3
.9
.3
.3
.1
.3
0
.25
.15 0
.05
0 .5 .3 0 .1
0
1.0 .3 0 .1
0 .5
.6 0
.2
0 .75 1.35 0 .15
0 .75 .45 0 .15
0 .75 .45 0
.45
0 2.25 .45 0 .15
0
.25 .45 0
.05
0
.25
.15 0
.05
.2
.4
.4
.8
.6
.6
.6
1.8
.2
.2
(34)
Lower left 10 10
.75
0
.55
.2 .15 .1
.75
1.25 .4 .45 .4 .15 .1 .25
.1 .4
.5 .2
0 .1
.2
.3
.2 .3
.4
0
.3
.2
.9
0 1.1
.2
0 .1 .6
.7
.2
.9 .4
0 .1
.2
.25 .4 .25
.4 .45
.2
.05
.45
.2
.15 .8
.15 .6
.05
.15
0
.55 .4
.15
.2 .15
.55
.2 .45
.8
.15
.2
.05
.25 .5 .2
.75 .5 .2
.2 .75
.1
.2
.25 .3
.2
.25
.1
.6
.25
.1
.05 .75
.1
.05
.25 .3
.05
.25
.1
.15
.25
.1
(35)
Lower right 10 10
1.9 .2
.2
.1
0
.25
.15
0 .15
.2
.3 1.6
.2
.1
0 .75
.15
0
.05 .6
.15
.1 2.2 .2
0 .5 .45
0
.05
.2
.15
.1 .4 1.6
0 .5
.15
0
.05
.2
.45
.1 .4 .2 1.0 .5
.15
0 .15
.2
.15 .3 .4 .2
0
2.5
.15
0
.05 .6
.15
.1 .6
.1
0
.25
1.9
0 .1 .4
.15
.1
.2 .3
0
.25 .3 1.0 .1 .4
.45
.1
.2
.1
0
.25 .3
0
1.3 .4
.15 .3
.2
.1
0 .75 .3
0 .1 2.2
The solution is
= 4.014,
= (.650, .467, .183),
a
.111
.225 .170 .166
=
.303 .515
.
.489 .276
.599
.051 .133
.681
16
(36)
P
P
Note that i a
i = j bj = 0 and the ij sum to 0 by rows and columns. A solution
can be obtained by pretending that a, b, are random variables with V ar(a) = 3I,
V ar(b) = 8I, V ar() = 12I. The coefficient matrix of these is in (17.37) . . . (17.39) and
the right hand side is (6.35, 3.05, 1.8, 1.5, 3.15, .85, .8, 1.55, 1.5, .55, .65, .35, 1.05, .3, 0,
.45, .6, 0, .15, .75). The solution is
= 4.014,
= (.325, .233, .092)0 ,
a
= (.648, .080, .139, .590)0 ,
b
.760
.590
.086 .136
=
0 1.043
.580 .470
.
.367
0 .294
.295
This is a different solution from the one above, but the
ij are identical for the two. These
are as follows, in table form,
a
i =
j bj
X
j
X
i
ij = 2 a
i /a2 = 4 a
i .
ij = 2 bj /b2 = 1.5 bj .
2
a
=
2
b
2
Upper left 8 8
1
120
186
72
112
66 48 72 24 24 66
0 0 30 12 18 12
106 0 24 12 0 30
88 18 0 6 24
87 0 0 0
39 0 0
39 0
81
17
(37)
1
120
30
12
18
12
24
12
0
30
18
0
6
24
30 0 0 30 0 0 0
12 0 0 0 12 0 0
18 0 0 0 0 18 0
12 0 0 0 0 0 12
0 24 0 24 0 0 0
0 12 0 0 12 0 0
0 0 0 0 0 0 0
0 30 0 0 0 0 30
0 0 18 18 0 0 0
0 0 0 0 0 0 0
0 0 6 0 0 6 0
0 0 24 0 0 0 24
(38)
Lower 12 12
= 1201 diag (40, 22, 28, 22, 34, 22, 10, 40, 28, 10, 16, 34)
(39)
K0 for SS columns is
0 0 0 0 3 0 0 3 1 0 0 1 1 0 0 1 1 0 0 1
0 0 0 0 0 3 0 3 0 1 0 1 0 1 0 1 0 1 0 1 /3.
0 0 0 0 0 0 3 3 0 0 1 1 0 0 1
1 0 0 1 1
The two way mixed model is one in which the elements of the rows (or columns) are a
random sample from some population of rows (or columns), and the levels of columns (or
rows) are fixed. We shall deal with random rows and fixed columns. There is really more
than one type of mixed model, as we shall see, depending upon the variance-covariance
matrices, V ar(a) and V ar(), and consequently V ar(), where = vector of elements,
+ ai + bj + ij . The most commonly used model is
V ar() =
C 0 0
0 C 0
.. ..
..
,
. .
.
0 0
C
18
(40)
where C is q q, q being the number of columns. There are p such blocks down the
diagonal, where p is the number of rows. C is a matrix with every diagonal = v and every
off-diagonal = c. If the rows were sires and the columns were traits and if V ar(e) = Ie2 ,
this would imply that the heritability is the same for every trait, 4 v/(4v + e2 ), and the
genetic correlation between any pair of traits is the same, c/v. This set of assumptions
should be questioned in most mixed models. Is it logical to assume that V ar(ij ) =
V ar(ij 0 ) and that Cov(ij , ik ) = Cov(ij , im )? Also is it logical to assume that
V ar(eijk ) = V ar(eij 0 k )? Further we cannot necessarily assume that ij is uncorrelated
with i0 j . This would not be true if the ith sire is related to the i0 sire. We shall deal more
specifically with these problems in the context of multiple trait evaluation.
Now let us consider what assumptions regarding
a
V ar
will lead to V ar() like (17.40). Two models commonly used in statistics accomplish this.
The first is based on the model for unrelated interactions and main effects formulated in
Section 15.4.
V ar(a) = Ia2 ,
since the number of levels of a in the population , and
V ar(ij ) = 2 .
Cov(ij , ij 0 ) = 2 /(q 1).
Cov(ij , i0 j ) = 2 /(one less than population levels of a) = 0.
Cov(ij , i0 j ) = 2 /(q 1) (one less than population levels of a) = 0.
This leads to
P 0
V ar() = 0 P
.. ..
. .
0 y2 ,
..
.
(41)
where P is a matrix with 10 s in diagonals and 1/(q 1) in all off-diagonals. Under this
model
V ar(ij ) = a2 + 2 .
Cov(ij , ij 0 ) = a2 2 /(q 1).
(42)
An equivalent model often used that is easier from a computational standpoint, but
less logical is
2
2
V ar(a ) = Ia
, where a
= a2 2 /(q 1).
2
2
V ar( ) = I
, where
= q2 /(q 1).
19
(43)
Note that we have re-labelled the row and interaction effects because these are not the
same variables as a and .
The results of (17.43) come from principles described in Section 15.9. We illustrate
these two models (and estimation and prediction methods) with our same two way example. Let
V ar(e) = 20I, V ar(a) = 4I, and
P 0 0
V ar() = 6
0 P 0 ,
0 0 P
where P is a 4 4 matrix with 1s for diagonals and 1/3 for all off-diagonals. We set up
the least squares equations with deleted, multiply the first 3 equations by 4 I3 and the
last 12 equations by V ar() described above. Then add 1 to the first 4 and the last 12
diagonal coefficients. This yields equations with coefficient matrix in (17.44) . . . (17.47).
The right hand side is (12.2, 7.2, 6.0, 3.15, .85, .8, 1.55, 5.9, -1.7, -.9, -3.3, 4.8, -1.2, -3.6,
0, 1.8, -3.0, -1.8, 3.0)0 .
Upper left 10 10
3.4
0
0 1.0
.4
.6
.4 1.0
.4
.6
0 3.2
0
.8
.4
0 1.0
0
0
0
0
0 2.6
.6
0
.2
.8
0
0
0
.25 .2 .15
.6
0
0
0 .25
0
0
.1 .1
0
0
.2
0
0
0
.1
0
.15
0 .05
0
0
.2
0
0
0 .15
.1 .25 .2
0
0
0 .55
0
0
0
.8
0
0 1.5 .2 .3 .2 2.5 .2 .3
.4
0
0 .5
.6 .3 .2 .5 1.6 .3
0
0
0 .5 .2
.9 .2 .5 .2 1.9
(44)
Upper right 10 9
.4 0 0 0
0
0 0
0 0
0 .8 .4 0 1.0
0 0
0 0
0 0 0 0
0 .6 0 .2 .8
0 .2 0 0
0 .15 0
0 0
0 0 .1 0
0
0 0
0 0
0 0 0 0
0
0 0 .05 0
.1 0 0 0 .25
0 0
0 .2
.2 0 0 0
0
0 0
0 0
.2 0 0 0
0
0 0
0 0
.2 0 0 0
0
0 0
0 0
20
(45)
Lower left 9 10
.4
0
0 .5
0
.5
0 1.2
0 .3
0 .4
0 1.1
0 .4
0
.9
0 .4
0
0
.4
.9
0
0 .8 .3
0
0 .4 .3
0
0
.8 .3
.2
.2
.6
.2
.2
0
0
0
0
.3
0
0
0
0
.1
.1
.3
.1
.6 .5 .2 .3
.5
0
0
0
.5
0
0
0
.5
0
0
0
1.5
0
0
0
.4
0
0
0
.4
0
0
0
.4
0
0
0
1.2
0
0
0
(46)
Lower right 9 9
1.6
0
0
0
0
0
0
0
0
0 2.2 .2
0 .5
0
0
0
0
0 .4 1.6
0 .5
0
0
0
0
0 .4 .2 1.0 .5
0
0
0
0
0 .4 .2
0 2.5
0
0
0
0
0
0
0
0
0 1.9
0 .1 .4
0
0
0
0
0 .3 1.0 .1 .4
0
0
0
0
0 .3
0 1.3 .4
0
0
0
0
0 .3
0 .1 2.2
(47)
The solution is
= (.563, .437, .126)0 .
a
= (5.140, 4.218, 3.712, 2.967)0 .
b
.104
.163 .096 .170
=
.219 .414
.421 .226
.
.524
.063 .122
.584
The ij sum to 0 by rows and columns.
When we employ the model with V ar(a ) = 2I and V ar( ) = 8I, the coefficient
matrix is in (17.48) . . . (17.50) and the right hand side is (3.05, 1.8, 1.5, 3.15, .85, .8, 1.55,
1.5, .55, .65, .35, 1.05, .3, 0, .45, .6, 0, .15, .75)0 .
Upper left 7 7
1.1
0
1.05
0 .25 .1 .15 .1
0 .2 .1
0 .25
.9 .15 0 .05 .2
.6 0
0
0
.2
0
0
.2
0
.55
21
(48)
.25
0
0 .25 0
0
0
.1
0
0
0 .1
0
0
.15
0
0
0 0 .15
0
.1
0
0
0 0
0 .1
0 .2
0 .2 0
0
0
0 .1
0
0 .1
0
0
0
0
0
0 0
0
0
0 .25
0
0 0
0 .25
0
0 .15 .15 0
0
0
0
0
0
0 0
0
0
0
0 .05
0 0 .05
0
0
0 .2
0 0
0 .2
(49)
Lower right 12 12
= diag (.375, .225, .275, .225, .325, .225, .125, .375, .275, .125, .175, .325).
(50)
The solution is
= (.282, .219, .063)0 , different from above.
a
= (5.140, 4.218, 3.712, 2.967)0 , the same as before.
b
.385
.444
.185
.112
=
0 .632
.202 .444
,
.588
0 .185
.521
different from above. Now the sum to 0 by columns, but not by rows. This sum is
2
2
a
i /a
= 4
ai .
As we should expect, the predictions of subclass means are identical in the two solutions.
These are
1
...
1
3
1
3
1
22
b2 ,
where b2 is a pseudo-variance.
Suppose we want to estimate the variances. In that case the model with
2
2
V ar(a ) = Ia
and V ar( ) = I
is obviously easier to deal with than the pedagogically more logical model with V ar()
2
2
not a diagonal matrix. If we want to use that model, we can estimate a
and
and
2
2
then by simple algebra convert those to estimates of a and .
23
Chapter 18
The Three Way Classification
C. R. Henderson
1984 - Guelph
(1)
We first illustrate a fixed model with V ar(e) = Ie2 . A simple way to approach this
model is to write it as
yijkm = ijk + eijkm .
(2)
Then BLUE of ijk is y ijk. provided nijk > 0. Also BLUE of
XXX
i
pijk ijk =
XXX
i
pijk y ijk. ,
where summation is over subclasses that are filled. But if subclasses are missing, there
may not be linear functions of interest to the experimenter. Analogous to the two-way
fixed model we have these definitions.
a effects
b effects
c effects
ab interactions
abc interactions
=
=
=
=
=
i.. ... ,
.j. ... ,
..k ... ,
ij. i.. .j. + ... ,
ijk ij. i.k .jk
+i.. + .j. + ..k ... .
(3)
None of these is estimable if a single subclass is missing. Consequently, the usual tests of
hypotheses cannot be effected exactly.
Suppose we wish to test the hypotheses that a effects, b effects, c effects, ab interactions, ac interactions, bc interactions, and abc interactions are all zero where these are
defined as in (18.3). Three different methods will be described. The first two involve
setting up least squares equations reparameterized by
X
ai = b j = c k = 0
(4)
jk
We illustrate this with a 2 3 4 design with subclass numbers and totals as follows
b1
a
1
2
a c1
1 53
2 111
c1
3
7
c2
5
2
c3
2
5
b1
c2 c3
110 41
43 89
c4
6
1
c1
5
6
nijk
b2
c2 c3
2 1
2 4
b3
c4
4
3
c1
5
3
c2
2
4
yijk.
b2
c4 c1 c2 c3 c4 c1
118 91 31 9 55 96
9 95 26 61 35 52
c3
1
6
c4
1
1
b3
c2
31
55
c3 c4
8 12
97 10
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
The first column pertains to , the second to a, the next two to b, and the last 3 to c. The
remaining 17 columns are formed by operations on columns 2-7. Column 8 is formed by
taking the products of corresponding elements of columns 2 and 3. Thus these are 1(1),
1(1), 1(1), 1(1), 1(0), . . ., -1(-1). The other columns are as follows: 9 = 2 4, 10 = 2 5,
11 = 2 6, 12 = 2 7, 13 = 3 5, 14 = 3 6, 15 = 3 7, 16 = 4 5, 17 = 4 6,
18 = 4 7, 19 = 2 13, 20 = 2 14, 21 = 2 15, 22 = 2 16, 23 = 2 17, 24 = 2 18.
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
Then the least squares coefficient matrix is X NX, where N is a diagonal matrix of nijk .
0
The right hand sides are X y., where y. is the vector of subclass totals. The coefficient
matrix of the equations is in (18.5) . . . (18.7). The right hand side is (1338, -28, 213, 42,
259, 57, 66, 137, 36, -149, -83, -320, -89, -38, -80, -30, -97, -103, -209, -16, -66, -66, 11,
19).
Upper left 12 12
81 7
81
8 4 13
1
3
6
2 9 5 17
6 2 9 5 17
8
4
13
1
3
54 23 3 4 5 4 5 11
0 3
50 2 7 7 5 8 4
1
1
45 16
16 11 4
3
6
6
33
16
0
1
6
7
6
.
35 3
1
6
6 5
54 23 3 4 5
50 2 7 7
45 16
16
33
16
35
(5)
3 4 5 2 7 7 11
0 3 4
1
1
11
0 3 4
1
1 3 4 5 2 7 7
9
4
5
6
4
5 7 4 13
2 2 5
6
4
5 10
1
3
2 2 5
0 3 9
7
5
5
8
5
5 1
5
5 2
1
1
5
6
5
5
3
5
5 10
5
1
3
1
.
5
5
5
5
5
3
5
5
7
1
1
3
7 4 13
2 2 5
9
4
5
6
4
5
2 2 5
0 3 9
6
4
5 10
1
3
1
5
5 2
1
1
7
5
5
8
5
5
5 10
5
1
3
1
5
6
5
5
3
5
5
5
7
1
1
3
5
5
5
5
5
3
(6)
Lower right 12 12
27
9
22
9 10
9 2
23 2
28
2
8
2
9
19
2
2
9
9
9
21
3
5
5 2
0
0
5
6
5 0 2
0
5
5 3 0
0 5
2
0
0 2
1
1
0 2
0 1 1
1
0
0 5 1
1 7
.
27
9
9 10
2
2
22
9 2
8
2
23 2
2
9
28
9
9
19
9
21
6
(7)
The resulting solution is (15.3392, .5761, 2.6596, -1.3142, 2.0092, 1.5358, -.8864, 1.3834,
-.4886, .4311, .2156, -2.5289, -3.2461, 2.2154, 2.0376, .9824, -1.3108, -1.0136, -1.4858,
-1.9251, 1.9193, .6648, .9469, -6836).
One method for finding the numerator sums of squares is to compare reductions,
that is, subtracting the reduction when each factor and interaction is deleted from the
reduction under the full model. For A, equation and unknown 2 is deleted, for B equations
3 and 4 are deleted, . . . , for ABC equations 19-24 are deleted. The reduction under the
full model is 22879.49 which is also simply
XXX
i
2
yijk.
/nijk .
SS
17.88
207.44
192.20
55.79
113.25
210.45
92.73
(8)
o
o
for ABC.
, . . . , 24
oi is a subvector of the solution, 2o for A; 3o , 4o for B, . . . , 17
o
V ar( i ) is the corresponding diagonal block of the inverse of the 24 24 coefficient
matrix, not shown, multiplied by e2 . Thus
.0344 .0140
.0140
.0352
!1
2.6596
1.3142
etc. The terms inverted are diagonal blocks of the inverse of the coefficient matrix. These
give the same results as by the first method.
The third method is to compute
1
0 (V ar(K0i ))
(K0i )
7
e2 .
K0
(9)
V ar()/
e = N ,
When one or more subclasses is missing, the usual estimates and tests of main effects
and interactions cannot be made. If one is satisfied with estimating and testing functions
like K0 , where is the vector of ijk corresponding to filled subclasses, BLUE and exact
tests are straightforward. BLUE of
K0 = K0 y,
(10)
where y. is the vector of means of filled 3 way subclasses. The numerator SS for testing
the hypothesis that K0 = c is
(K0 y. c)0 V ar(K0 y.)1 (K0 y. c)e2 .
(11)
V ar(K0 y.)/e2 = K0 N1 K,
(12)
Unfortunately, if many subclasses are missing, the experimenter may have difficulty
in finding functions of interest to estimate and test. Most of them wish correctly or
otherwise to find estimates and tests that mimic the filled subclass case. Clearly this is
possible only if one is prepared to use biased estimators and approximate tests of the
functions whose estimators are biased.
We illustrate some biased methods with the following 2 3 4 example.
b1
a
1
2
c1
3
7
c2
5
2
c3
2
5
b1
a c1
1 53
2 111
c2 c3
110 41
43 89
c4
6
0
c1
5
6
nijk
b2
c2 c3
2 0
2 4
b3
c4
4
3
c1
5
3
c2
2
4
yijk.
b2
c4 c1 c2 c3 c4 c1
118 91 31 55 96
95 26 61 35 52
c3
0
6
c4
0
0
b3
c2
31
55
c3 c4
97
Note that 5 of the potential 24 abc subclasses are empty and one of the potential 12 bc
subclasses is empty. All ab and ac subclasses are filled. Some common procedures are
1. Estimate and test main effects pretending that no interactions exist.
2. Estimate and test main effects, ac interactions, and bc interactions pretending that bc
and abc interactions do not exist.
3. Estimate and test under a model in which interactions sum to 0 and in which each of
the 5 missing abc and the one missing bc interactions are assumed = 0.
All of these clearly are biased methods, and their goodness depends upon the
closeness of the assumptions to the truth. If one is prepared to use biased estimators,
it seems more logical to me to attempt to minimize mean squared errors by using prior
values for average sums of squares and products of interactions. Some possibilities for our
example are:
1. Priors on abc and bc, the interactions associated with missing subclasses.
2. Priors on all interactions.
3. Priors on all interactions and on all main effects.
Obviously there are many other possibilities, e.g. priors on c and all interactions.
The first method above might have the greatest appeal since it results in biases due
only to bc and abc interactions. No method for estimating main effects exists that does
not contain biases due to these. But the first method does avoid biases due to main
effects, ab, and ac interactions. This method will be illustrated. Let , a, b, c, ab, ac
be treated as fixed. Consequently we have much confounding among them. The rank of
the submatrix of X0 X pertaining to them is 1 + (2-1) + (3-1) + (4-1) + (2-1)(3-1) +
(2-1)(4-1) = 12. We set up least squares equations with ab, ac, bc, and abc including
missing subclasses for bc and abc. The submatrix for ab and ac has order, 14 and rank,
12. Treating bc and abc as random results in a mixed model coefficient matrix with
order 50, and rank 48. The OLS coefficient matrix is in (18.13) to (18.18). The upper 26
26 block is in (18.13) to (18.15), the upper right 26 24 block is in (18.16) to (18.17),
and the lower 24 24 block is in (18.18).
16
0 0
11 0
7
0
0
0
0
3
0
0
0
0
0
0
0
0
3
0
0
7
0
0
3
0
0
0
7
0
0
0
0
0
14
5
0
0
2
0
0
0
5
0
0
0
2
0
0
0
0
0
15
2
0
0
5
0
0
0
0
2
0
0
0
5
6
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
13
0
5
0
0
6
0
5
0
0
0
6
0
0
3
5
5
0
0
0
13
0
2
0
0
2
0
0
2
0
0
0
2
0
10
5
2
2
0
0
0
0
9
0
0
0
0
4
0
0
0
0
0
0
0
4
2
0
0
0
0
0
0
0
2
0
4
0
0
3
0
0
0
0
4
0
0
0
6
4
0
0
0
0
0
0
0
10
0
0
5
0
0
3
5
0
0
0
3
0
0
0
0
2
0
0
4
0
2
0
0
0
4
0
0
0
0
7
6
3
0
0
0
0
16
0
0
0
0
0
6
0
0
0
0
0
0
6
0
0
0
2
2
4
0
0
0
0
0
8
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
5
4
6
0
0
0
0
0
0
15
(13)
(14)
3
0
0
0
0
0
3
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0
0 0 0
10 0 0
7 0
7
5
0
0
0
0
0
0
5
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
6
0
5
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
11
0
2
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
11
0
0
0
0
0
0
0
4
0
4
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0
3
0
0
0
0
0
0
0
7
0
0
5
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
8
0
0
2
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
7
0
0
0
0
0
0
7
0
0
0
7
0
0
0
0
0
0
0
0
0
0
0
(15)
(16)
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
5
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
6
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
3
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
4
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
6
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
diag(3, 5, 2, 6, 5, 2, 0, 4, 5, 2, 0, 0, 7, 2, 5, 0, 6, 2, 4, 3, 3, 4, 6, 0).
(17)
(18)
K=
1
8
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
=
,
18.35
21.77
20.50
19.52
17.98
15.61
16.82
13.97
19.01
15.97
17.88
15.86
16.10
20.37
17.91
15.71
15.72
13.50
15.17
11.67
16.99
14.07
16.13
12.95
X X X
i
13
2
yijk.
/nijk .
Then we divide by n - the number of filled subclasses. Three reductions are needed to
2
2
estimate bc
and abc
. The easiest ones are probably
Red (full model) described above.
Red (ab,ac,bc).
Red (ab,ac).
Partition the OLS coefficient matrix as
(C1 C2 C3 ).
C1 represents the first 14 cols., C2 the next 12, and C3 the last 24. Then compute C2 C02
and C3 C03 . Let Q2 be the g-inverse of the matrix for Red (ab,ac,bc), which is the LS
coefficient matrix with rows (and cols.) 27-50 set to 0. Q3 is the g-inverse for Red (ab,ac),
which is the LS coefficient matrix with rows (and cols.) 15-50 set to 0. Then
2
2
E[Red (full)] = 19e2 + n(bc
+ abc
) + t,
2
2
2
E[Red (ab, ac, bc)] = 17e + nbc + trQ2 C3 C03 abc
+ t,
2
0 2
2
E[Red (ab, ac) = 12e + trQ3 C2 C2 bc + trQ3 C3 C03 abc
+ t.
t is a quadratic in the fixed effects. The coefficient of e2 is in each case the rank of the
coefficient matrix used in the reduction.
Mixed models could be of two general types, namely one factor fixed and two random
such as a fixed and b and c random, or with two factors fixed and one factor random, e.g.
a and b fixed with c random. In either of these we would need to consider whether the
populations are finite or infinite and whether the elements are related in any way. With
a and b fixed and c random we would have fixed ab interaction and random ac, bc, abc
interactions. With a fixed and b and c random all interactions would be random.
We also need to be careful about what we can estimate and predict. With a fixed
and b and c random we can predict elements of ab, ac, and abc only for the levels of
a in the experiment. With a and b fixed we can predict elements of ac, bc, abc only
for the levels of both a and b in the experiment. For infinite populations of b and c in
the first case and c in the second we can predict for levels of b and c (or c) outside the
experiment. BLUP of them is 0. Thus in the case with c random, a and b fixed, BLUP
of the 1,2,20 subclass when the number of levels of c in the experiment <20, is
o + ao1 + bo2 + abo12 .
In contrast, if the number of levels of c in the experiment >19, BLUP is
14
15
Chapter 19
Nested Classifications
C. R. Henderson
1984 - Guelph
The nested classification can be described as cross-classification with disconnectedness. For example, we could have a cross-classified design with the main factors being
sires and dams. Often the design is such that a set of dams is mated to sire 1 a second
2
set to sire 2, etc. Then d2 and ds
, dams assumed random, cannot be estimated sepa2
rately, and the sum of these is defined as d/s
. As is the case with cross-classified data,
estimability and methods of analysis depend upon what factors are fixed versus random.
We assume that the only possibilities are random within random, random within fixed,
and fixed within fixed. Fixed within random is regarded as impossible from a sampling
viewpoint.
with ti and aij fixed. The j subscript has no meaning except in association with some i
subscript. None of the ti is estimable nor are differences among the ti . So far as the aij
are concerned
X
X
a
for
j = 0 can be estimated.
j
ij
j
j
Thus we can estimate 2ai1 ai2 ai3 . In contrast it is not possible to estimate differences
between aij and agh (i 6= g) or between aij and agh (i 6= g, j 6= h). Obviously main effects
can be defined only as some averaging over the nested factors. Thus we could define the
P
P
mean of the ith main factor as i = ti + j kj aij where j kj = 1. Then the ith main
effect would be defined as i -
. Tests of hypotheses of estimable linear functions can
be effected in the usual way, that is, by utilizing the variance-covariance matrix of the
estimable functions.
Let us illustrate with the following simple example
i
V ar(
i )
2
1
1
4. e (4 + 5 )/4 = .1125 e2
7. e2 (1 + 101 + 21 )/9 = .177 e2
8.5 e2 (51 + 21 )/4 = .175 e2
Test
1 0 1
0 1 1
1 0 1
0 1 1
= 0
3.5
.5
=
e2 Var ()
e2 =
V ar(K0 )
with inverse
4.98283 2.47180
2.47180
4.06081
(3.5 .5)
3.5
.5
/2 = 26.70
Estimate e2 as
e2 = within subclass mean square. Then the test is numerator MS/
e2
with 2,26 d.f. A possible test of differences among aij could be
1 1 0 0
0 0
0
0
0 1 0 1 0
0
0
0 0 1 1 0
0
0
0 0 0
0 1 1
a,
V ar =
2.22222
0
0
0
0
.92308 .76923
0
0
.76923 2.30769
0
0
0
0
1.42857
S.S. for T =
S.S. for A =
XX
i
i
2
yij.
/nij
X
i
2
yi..
/ni. .
In our example,
MST = (1290.759 1192.966)/2 = 48.897.
MSA = (1304 1290.759)/1 = 13.24.
Note that the latter is the same as in the previous method. They do in fact test the same
hypothesis. But MST is different from the result above which tests treatments averaged
equally over the a nested within it. The second method tests differences among t weighted
over a according to the number of observations. Thus the weights for t1 are (4,5)/9.
To illustrate this test,
0
K =
.444444 .555555
0
0
0
.71429 .28572
0
0
.07692 .76923 .15385 .71429 .28572
0
V ar(K
yij )/e2
with inverse =
.253968 .142857
.142857 .219780
6.20690 4.03448
4.03448
7.17242
Then the MS is
(4.82540 1.79122)
6.20690 4.03448
4.03448
7.17242
4.82540
1.79122
/2 = 48.897
as in the regular ANOVA. Thus ANOVA weights according to the nij . This does not
appear to be a particularly interesting test.
3
There are two different sampling schemes that can be envisioned in the random nested
within fixed model. In one case, the random elements associated with every fixed factor
are assumed to be a sample from the same population. A different situation is one in
which the elements within each fixed factor are assumed to be from separate populations.
The first type could involve treatments as the fixed factors and then a random sample of
sires is drawn from a common population to assign to a particular treatment. In contrast,
if the main factors are breeds, then the sires sampled would be from separate populations,
namely the particular breeds. In the first design we can estimate the difference among
treatments, each averaged over the same population of sires. In the second case we would
compare breeds defined as the average of all sires in each of the respective breeds.
2.1
yij.
Treatments
1 2
3
7 6 - 7
- 9
- 8
Let us treat this first as a multiple trait problem with V ar(e) = 40I,
si1
3 2 1
V ar si2 = 2 4 2
,
si3
1 2 5
where sij refers to the value of the ith sire with respect to the j th treatment. Assume that
the sires are unrelated. The inverse is
3 2 1
2 4 2
1 2 5
.5
.25
0
1
80
50 20
0
0
0
0
0
0
0
0
20
35 10
0
0
0
0
0
0
0
0 10
20
0
0
0
0
0
0
0
0
0
0
44 20
0
0
0
0
0
0
0
0 20
35 10
0
0
0
0
0
0
0
0 10
20
0
0
0
0
0
0
0
0
0
0
40 20
0
0
0
0
0
0
0
0 20
41 10
0
0
0
0
0
0
0
0 10
20
0
0
0
0
0
0
0
0
0
0
40
0
0
0
0
0
0
0
0
0 20
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
10
0
0
4
0
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0 10 0 0
0
0
0
0
0 0 0 0
0
0
0
0
0 0 0 0
0
0
0
0
0 4 0 0
0
0
0
0
0 0 0 0
0
0
0
0
0 0 0 0
0
0
0
0
0 0 0 0
0
0
0
0
0 0 6 0
0
0
0
0
0 0 0 0
20
0
0
0
0 0 0 0
51 10
0
0
0 0 16 0
10
20
0
0
0 0 0 0
0
0
40 20
0 0 0 0
0
0 20
35 10 0 0 0
0
0
0 10
30 0 0 10
0
0
0
0
0 14 0 0
16
0
0
0
0 0 22 0
0
0
0
0
10 0 0 10
(s11 , s12 , s13 , s21 , s22 , s23 , s31 , s32 , s33 ,
s41 , s42 , s43 , s51 , s52 , s53 , t1 , t2 , t3 )0
= [.175, 0, 0, .15, 0, 0, 0, .175, 0, 0, .225,
0, 0, 0, .2, .325, .4, .2]0 .
5
(1)
The solution is
(.1412, .0941, .0471, .1412, .0941, .0471, .0918, .1835,
.0918, .0918, .1835, .0918, 0, 0, 0, 1.9176, 1.5380, 1.600)0 .
(2)
Now if we treat this as a nested model, G = diag (3,3,4,4,5). Then the mixed model
equations are in (19.3).
1201
55
0
46
0
0
39
0
0
0
54
0 15 0 0
0 6 0 0
0 0 9 0
0 0 24 0
39 0 0 15
21 0 0
33 0
15
s1
s2
s3
s4
s5
t1
t2
t3
1201
21
18
21
27
24
39
48
24
(3)
The solution is
(.1412, .1412, .1835, .1835, 0, 1.9176, 1.5380, 1.6000)0 .
(4)
Note that s11 = s1 , s21 = s2 , s32 = s3 , s42 = s4 , s53 = s5 from the solution in
(19.2) and (19.4). Also note that tj are equal in the two solutions. The second method
is certainly easier than the first but it does not predict values of sires for treatments in
which they had no progeny.
2.2
Now we assume that we have a population of sires unique to each breed. Then the
first model of Section 19.2.1 would be useless. The second method illustrated would be
appropriate if sires were unrelated and s2 = 3,4,5 for the 3 breeds. If s2 were the same
for all breeds G = I5 s2 .
s
Is2 0
0
Id2 0
V ar
d = 0
.
2
e
0
0 Ie
Let us use the data of Section 19.2.1 but now let t refer to sires and s to dams. Suppose
e2 /s2 = 12, e2 /d2 = 10. Then the mixed model equations are in (19.5).
23
7 11
19 0
23
5
0
0
17
5
5
0
0
15
2
2
0
0
0
12
3
0
3
0
0
0
13
8
0
8
0
0
0
0
18
5
0
0
5
0
0
0
0
15
1
s1
s2
s3
d1
d2
d3
d4
d5
37
13
16
8
7
6
7
9
8
(5)
The solution is (l.6869, .0725, -.0536, -.0l89, -.ll98, .2068, .l6l6, -.2259, -.0227)0 . Note that
P
si = 0 and that 10 (Sum of d within ith sire)/12 = si .
Chapter 20
Analysis of Regression Models
C. R. Henderson
1984 - Guelph
A regression model is one in which Zu does not exist, the first column of X is a vector
of 1s, and all other elements of X are general (not 0s and 1s) as in the classification
model. The elements of X other than the first column are commonly called covariates
or independent variables. The latter is not a desirable description since they are not
variables but rather are constants. In hypothetical repeated sampling the value of X
remains constant. In contrast e is a sample from a multivariate population with mean
= 0 and variance = R, often Ie2 . Accordingly e varies from one hypothetical sample
to the next. It is usually assumed that the columns of X are linearly independent, that
is, X has full column rank. This should not be taken for granted in all situations, for
it could happen that linear dependencies exist. A more common problem is that near
but not complete dependencies exist. In that case, (X0 R1 X)1 can be quite inaccurate,
can be extremely large. Methods for
and the variance of some or all of the elements of
dealing with this problem are discussed in Section 20.2.
where
X =
1
1
..
.
w1
w2
..
.
1 wn
The most simple form of V ar(e) = R is Ie2 . Then the BLUE equations are
n w.
P 2
w.
wi
y.
P
wi yi
(1)
5 20
20 90
30
127
/e2 .
e2 .
y.
(wi w. )
yi
(2)
5 0
0 10
30
7
/e2 .
= 6, = .7.
) = .1e2 , Cov(
, ) = 0.
V ar(
) = .2e2 , V ar(
It is easy to verify that
=
w. . . These two alternative models meet the requirements
of linear equivalence, Section 1.5.
BLUP of a future y say y0 with wi = w0 is
+ w0 + e0 or
+ (w0 w)
+ e0 ,
where e0 is BLUP of e0 = 0, with prediction error variance, e2 . If w0 = 3, y0 would
be 5.3 in our example. This result assumes that future or ) have the same value as in
the population from which the original sample was taken. The prediction error variance
is
!
!
1.8 .4
1
(1 3)
e2 + e2 = 1.3 e2 .
.4
.1
3
Also using the second model it is
(1 1)
.2 0
0 .1
1
1
e2 + e2 = 1.3 e2
In the multiple regression model the first column of X is a vector of 1s, and there
are 2 or more additional columns of covariates. For example, the second column could
represent age in days and the third column could represent initial weight, while y represents final weight. Note that in this model the regression on age is asserted to be the same
for every initial weight. Is this a reasonable assumption? Probably it is not. A possible
modification of the model to account for effect of initial weight upon the regression of final
weight on age and for effect of age upon the regression of final weight on initial weight is
where
yi = + 1 w1 + 2 w2 + 3 w3 + ei ,
w3 = w1 w2 .
This model implies that the regression coefficient for y on w1 is a simple linear function
of w2 and the regression coefficient for y on w2 is a simple linear function of w1 . A model
like this sometimes gives trouble because of the relationship between columns 2 and 3
with column 4 of X . We illustrate with
X =
1
1
1
1
1
6
5
5
6
7
8
9
8
7
9
48
45
40
42
63
The elements of column 4 are the products of the corresponding elements of columns 2
and 3. The coefficient matrix is
29 41
171 238
339
238
1406
1970
11662
(3)
135.09
91.91 15.45
63.10 10.55
1.773
(4)
1
5.8
8.2
47.56
2
e
+ e2 = 1.203 e2
.359 .026
.
.359
The variances of the errors of prediction of the two predictors above would then be 1.20
and 7.23, the second of which is much smaller than when w3 is included. But if w3 6= 0,
the predictor is biased when w3 is not included.
Let us look at the solution when w3 is included and y0 = (6, 4, 8, 7, 5). The solution
is
(157.82, 23.64, 17.36, 2.68).
This is a strange solution that is the consequence of the large elements in (X0 X)1 . A
better solution might result if a prior is placed on w3 . When the prior is 1, we add 1 to
the lower diagonal element of the coefficient matrix. The resulting solution is
(69.10, 8.69, 7.16,
.967).
This type of solution is similar to ridge regression, Hoerl and Kennard (1970). There
is an extensive statistics literature on the problem of ill-behaved X0 X. Most solutions
to this problem that have been proposed are (1) biased (shrunken estimation) or (2)
dropping one or more elements of from the model with either backward or forward
type of elimination, Draper and Smith (1966). See for example a paper by Dempster et
al. (1977) with an extensive list of references. Also Hocking (1976) has many references.
Another type of covariate is involved in fitting polynomials, for example
yi = + xi 1 + x2i 2 + x3i 3 + x4i 4 + ei .
As in the case when covariates involve products, the sampling variances of predictors are
large when xi departs far from x. The numerator mean square with 1 d.f. can be computed
easily. For the ith i it is
i2 /ci+1 ,
where ci+1 is the i+1 diagonal of the inverse of the coefficient matrix. The numerator
can also be computed by reduction under the full model minus the reduction when i is
dropped from the solution.
Chapter 21
Analysis of Covariance Model
C. R. Henderson
1984 - Guelph
Consider a model
yijk = ri + cj + ij + w1ijk 1 + w2ijk 2 + eijk .
All elements of the model are fixed except for e, which is assumed to have variance, Ie2 .
The nijk , yijk. , w1ijk. , and w2ijk. are as follows
1
2
3
1
3
1
2
nijk
2
2
3
1
3
1
4
2
yijk.
2 3
9 4
20 24
7 8
1
20
3
13
1
8
6
7
w1ijk.
2 3
7 2
10 11
2 4
1
12
5
9
X X X
i
X X X
i
X X X
i
X X X
i
2
w1ijk
= 209,
2
w2ijk
= 373,
w2ijk.
2
11
15
2
3
4
14
7
Then the matrix of coefficients of OLS equations are in (21.1). The right hand side vector
is (33, 47, 28, 36, 36, 36, 20, 9, 4, 3, 20, 24, 13, 7, 8, 321, 433).
6 0 0 3
8 0 1
5 2
2
3
1
0
6
1
4
2
0
0
7
3
0
0
3
0
0
3
2
0
0
0
2
0
0
2
1
0
0
0
0
1
0
0
1
0
1
0
1
0
0
0
0
0
1
0
3
0
0
3
0
0
0
0
0
3
0
4
0
0
0
4
0
0
0
0
0
4
0
0
2
2
0
0
0
0
0
0
0
0
2
0
0
1
0
1
0
0
0
0
0
0
0
0
1
0
0
2
0
0
2
0
0
0
0
0
0
0
0
2
17 27
27 34
13 18
21 26
19 28
17 25
8 12
7 11
2
4
6
5
10 15
11 14
7
9
2
2
4
7
209 264
373
(1)
A g-inverse of the coefficient matrix can be obtained by taking a regular inverse with the
first 6 rows and columns set to 0. The lower 11 11 submatrix of the g-inverse is in
(21.2).
4
10
8685
7311
15009
5162
7135
15313
7449
9843
5849
25714
2866
3832
2430
5325
3582
2780
3551
11869
6690
9139
6452
9312
11695
4588
6309
4592
5718
5735
4012
5158
2290
9016
4802
6507
4422
7519
6002
6938
285
264
226
2401
356
569
704
654
6
767
6163
8357
5694
9581
7704
5686
12286
1148
1652
1441
262
1435
821
1072
281
1151
440
580
(2)
This gives a solution vector (0, 0, 0, 0, 0, 0, 7.9873, 6.4294, 5.7748, 2.8341, 8.3174, 6.8717,
7.6451, 7.2061, 5.3826, .6813, -.7843). One can test an hypothesis concerning interactions
by subtracting from the reduction under the full model the reduction when is dropped
from the model. This tests that all ij i. .j + .. are 0. The reduction under the full
model is 652.441. A solution with dropped is
(6.5808, 7.2026, 6.5141, 1.4134, 1.4386, 0, .1393, .5915).
This gives a reduction = 629.353. Then the numerator SS with 4 d.f. is 652.441 - 629.353.
The usual test of hypothesis concerning rows is that all ri + c. + i. are equal. This is
comparable to the test effected by weighted squares of means when there are no covariates.
We could define the test as all ri + c. + i. + 1 w10 + 2 w20 are equal, where w10 , w20 can
have any values. This is not valid, as shown in Section 16.6, when the regressions are not
homogeneous. To find the numerator SS with 2 d.f. for rows take the matrix
0
K
0
=
K
1 1 1 1 1 1
0
0
0
1 1 1
0
0
0 1 1 1
2.1683
.0424
is the solution under the full model with ro , co set to 0. Next compute K0 [first
where
9 rows and columns of (21.2)] K as
=
4.5929 2.2362
2.2362 4.3730
Then
numerator SS = (2.1683 .0424)
4.5929 2.2362
2.2362 4.3730
!1
2.1683
.0424
= 1.3908.
If we wish to test w1 , compute as the numerator SS, with 1 d.f., .6813 (.0767)1 .6813,
where
1 = .6813, V ar(
1 ) = .0767 e2 .
We found in Section 17.3 that the two way fixed model with interaction and with one
or more missing subclasses precludes obtaining the usual estimates and tests of main
effects and interactions. This is true also, of course, in the covariance model with missing
subclasses for fixed by fixed classifications. We illustrate with the same example as before
3
except that the (3,3) subclass is missing. The OLS equations are in (21.3). The right
hand side vector is (33, 47, 20, 36, 36, 28, 20, 9, 4, 3, 20, 24, l3, 7, 0, 307, 406)0 . Note
that the equation for 33 is included even though the subclass is missing.
6 0 0 3
8 0 1
3 2
2
3
1
0
6
1
4
0
0
0
5
3
0
0
3
0
0
3
2
0
0
0
2
0
0
2
1
0
0
0
0
1
0
0
1
0
1
0
1
0
0
0
0
0
1
0
3
0
0
3
0
0
0
0
0
3
0
4
0
0
0
4
0
0
0
0
0
4
0
0
2
2
0
0
0
0
0
0
0
0
2
0
0
1
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
17 27
27 34
9 11
21 26
19 28
13 18
8 12
7 11
2
4
6
5
10 15
11 14
7
9
2
2
0
0
199 249
348
(3)
(4)
The matrix of estimated mean squared errors obtained by pre and post multiplying
4
1 0 0 1 0 0 1 0 0 0 0 0 0 0
..
0
L =
.
0 0 1 0 0 1 0 0 0 0 0 0 0 0
0 0 0
..
1 0 0
is in (21.5).
1
10, 000
8778
7287
13449
5196
6769
13215
8100
8908
4831
22170
6720
8661
5606
9676
11201
4895
6017
4509
7524
6007
6846
6151
7210
4931
9408
7090
5423
11244
3290
4733
2927
4825
4505
3214
4681
10880
2045
2788
7191
1080
2540
3885
5675
6514
45120
(5)
4.4081 1.6404
9.2400
!1
1.4652
2.0434
= 1.2636.
9
3
2
4
34
3
3
0
0
6
2 4 34
0 0
6
2 0
8
0 4 20
8 20 144
t1
t2
t3
68
18
10
40
276
(6)
Note that equations 2,3,4 sum to equation 1 and also (2 4 5) times these equations gives
the last equation. Accordingly the coefficient matrix has rank only 3, the same as if there
were no covariate. A solution is (0,6,5,10,0).
If t is random, there is no problem of estimability for then we need only to look at
the rank of
!
9 34
,
34 144
and that is 2. Consequently and are both estimable, and of course t is predictable. Let
us estimate t2 and e2 by Method 3 under the assumption V ar(t) = It2 , V ar(e) = Ie2 .
For this we need yy, reduction under the full model, and Red (, ).
y0 y = 601, E(y0 y) = 9 e2 + 9 t2 + q.
Red (full) = 558, E() = 3 e2 + 9 t2 + q.
Red (, ) = 537.257, E() = 2 e2 + 6.6 t2 + q.
t2 = 5.657 or a ratio of 1.27.
q is a quadratic in , . This gives estimates
e2 = 7.167,
Let us use 1 as a prior value of e2 /t2 and estimate t2 by MIVQUE given that e2
= 7.167. We solve for t having added 1 to the diagonal coefficients of equations 2,3,4 of
(21.6). This gives an inverse,
.94958
.15126 .10084
.35294
.54622
.30252 .05882
.79832 .29412
.27941
(7)
The solution is (3.02521, .55462, -1.66387, 1.10924, 1.11765). From this t0t = 4.30648.
To find its expectation we compute
.01483 .04449
.02966
0
.13347 .08898
tr(Ct [matrix (21.6)] Ct ) = tr
= .20761,
.05932
.03559 .10677
.07118
0
0
0
.32032 .21354
tr(Ct W Zt Zt WCt ) = tr
= .49827,
.14236
the coefficient of t2 in E(t0t). W0 Zt is the submatrix composed of cols. 2-4 of (21.6).
This gives
t2 = 5.657 or
e2 /
t2 = 1.27. If we do another MIVQUE estimation of t2 , given
e2 = 7.167 using the ratio, 1.27, the same estimate of t2 is obtained. Accordingly we
have REML of t2 , given e2 . Notice also that this is the Method 3 estimate.
If t were actually fixed, but we use a pseudo-variance in the mixed model equations
we obtain biased estimators. Using
e2 /
t2 = 1.27,
Random Regressions
It is reasonable to assume that regression coefficients are random in some models. For
example, suppose we have a model,
yij = + ci + wij i + eij ,
where yij is a yield observation on the j th day for the ith cow, wij is the day, and i is a
regression coefficient, linear slope of yield on time. Linearity is a reasonable assumption
for a relatively short period following peak production. Further, it is obvious that i is
different from cow to cow, and if cows are random, i is also random. Consequently we
should make use of this assumption. The following example illustrates the method. We
have 4 random cows with 3,5,6,4 observations respectively. The OLS equations are in
(21.8).
18 3 5 6
3 0 0
5 0
4 10
0 10
0 0
0 0
4 0
38
30 19
0 0
30 0
0 19
0 0
0 0
190 0
67
26
0
0
0
26
0
0
0
182
o
co
=
o
90
14
18
26
32
51
117
90
216
(8)
10 = w1. ,
30 = w2. , etc.
X
38 =
w2 , etc.
j ij
51 =
X
j
First let us estimate e2 , c2 , 2 by Method 3. The necessary reductions and their expectations are
y0 y
Red (full)
Red (, t)
Red (, )
18
8
4
5
18
18
18
16.9031
477
477
442.5
477
e2
c2
2
1
1
1
1
18 2 .
The reductions are (538, 524.4485, 498.8, 519.6894). This gives estimates
e2 = 1.3552,
2
2
2
2
2
2
= 2.311, the
e /
c = 2.143 and
e /
= .5863. Using the resulting ratios,
c = .6324,
mixed model solution is
(2.02339, .11180, .36513, .09307,
.34639, .73548, .34970, .76934, .83764).
Covariance models are discussed also in Chapter 16.
Chapter 22
Animal Model, Single Records
C. R. Henderson
1984 - Guelph
We shall describe a number of different genetic models and present methods for
BLUE, BLUP, and estimation of variance and covariance components. The simplest
situation is one in which we have only one trait of concern, we assume an additive genetic
model, and no animal has more than a single record on this trait. The scalar model, that
is, the model for an individual record, is
0
yi = xi + zi u + ai + ei .
represents fixed effects with xi relating the record on the ith animal to this vector.
u represents random effects other than breeding values and zi relates this vector to yi .
ai is the additive genetic value of the ith animal.
ei is a random error associated with the individual record.
The vector representation of the entire set of records is
y = X + Zu + Za a + e.
(1)
=
=
=
=
=
=
G.
Aa2 .
R, usually Ie2 .
0,
0,
0.
0 1
1
Z0 R1 Za
ZR X ZR Z+G
0
0
0
1
1
1
1
2
Za R X Za R Z
Za R Za + A /a
X0 R1 y
o
0 1
u
= Z R y .
0
a
Za R1 y
(2)
If Za = I, (22.2) simplifies to
o
X0 R1 y
X0 R1 X X0 R1 Z
X0 R1
0 1
0 1
0 1
1
0 1
= Z R y .
ZR
u
ZR X ZR Z+G
R1 X
R1 Z
R1 + A1 /a2
a
R1 y
(3)
0
Z0
u
= Z0 y .
Z X Z0 Z + G1 e2
X
Z
I + A1 e2 /a2
a
y
(4)
If the number of animals is large, one should, of course, use Hendersons method (1976) for
computing A1 . Because this method requires using a base population of non-inbred,
unrelated animals, some of these probably do not have records. Also we may wish to
evaluate some progeny that have not yet made a record. Both of these circumstances will
will contain predicted breeding values of these animals without
result in Za 6= I, but a
records.
We illustrate the model above with 5 pairs of dams and daughters, the dams records being
made in period 1 and the daughters in period 2. Ordering the records within periods and
with record 1 being made by the dam of the individual making record 6, etc.
X
1 1 1 1 1 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1
Za = I,
y0 = [5, 4, 3, 2, 6, 6, 7, 3, 5, 4].
!
I5 .5I5
A =
,
.5I5 I5
R = I10 e2 .
The sires and dams are all unrelated. We write the mixed model equations with e2 /a2
assumed to be 5. These equations are
5
0
1
1
1
1
1
0
0
0
0
0
0 1
5 0
0
0
0
0
0
1
1
1
1
1
1
0
23
3
10
3
1 1
0 0
I5
I5
1
0
10
3
23
3
0 0 0 0 0
1 1 1 1 1
I5
I5
p1
p2
=
a
20
25
5
4
3
2
6
6
7
3
5
4
(5)
.24
.02
.04
.04
.04
.04
.04
.02
.02
.02
.02
.02
.02 .04 .04 .04 .04 .04 .02 .02 .02 .02 .02
.24 .02 .02 .02 .02 .02 .04 .04 .04 .04 .04
.02
.02
.02
P
Q
.02
.02
.04
.04
.04
Q
P
.04
.04
(6)
= y (X Za )
e
0 e
= 13.94689.
e
V ar(
e) = (I WCW0 )(I WCW0 ) e2
+ (I WCW0 )A(I WCW0 ) a2 .
W = (X Za ),
C = matrix of (22.6).
0
) = tr(V ar(
E(
ee
e)) = 5.67265 e2 + 5.20319 a2
0 = Ca W0 y,
a
where Ca = last 10 rows of C.
0
0
V ar(
a) = Ca W0 WCa e2 + Ca W0 AWCa a2 .
) = tr(A1 V ar(
E(
a0 A1 a
a)) = .20813 e2 + .24608 a2 .
0 A1 a
= .53929.
a
a2 = .50.
e2 = 2.00,
a2 we obtain
Using these quadratics and solving for
e2 ,
The same estimates are obtained for any prior used for e2 /a2 . This is a consequence
of the fact that we have a balanced design. Therefore the estimates are truly BQUE and
also are REML. Further the traditional method, daughter-dam regression, gives the same
estimates. These are
a2 = .75619,
e2 = 2.90022.
When e2 /a2 is assumed equal to 5, the results are
0 e
= 16.83398 with expectation 4.9865 e2 + 4.6311 a2 ,
e
0 A1 a
0 = .66973 with expectation .191075 e2 + .214215 a2 .
a
Then
a2 = .67132,
e2 = 2.7524.
Chapter 23
Sire Model, Single Records
C. R. Henderson
1984 - Guelph
A simple sire model is one in which sires, possibly related, are mated to a random
sample of unrelated dams, no dam has more than one progeny with a record, and each
progeny produces one record. A scalar model for this is
0
(1)
represents fixed effects with xij relating the j th progeny of the ith sire to these effects.
si represents the sire effect on the progeny record.
u represents other random factors with zij relating these to the ij th progeny record.
eij is a random error.
The vector representation is
y = X + Zs s + Zu + e.
(2)
V ar(s) = As2 , where A is the numerator relationship of the sires, and s2 is the sire
variance in the base population. If the sires comprise a random sample from this
population s2 = 41 additive genetic variance. Some columns of Zs will be null if s contains
sires with no progeny, as will usually be the case if the simple method for computation of
A1 requiring base population animals, is used.
V ar(u) = G, Cov(s, u0 ) = 0.
V ar(e) = R, usually = Ie2 .
Cov(s, e0 ) = 0, Cov(u, e0 ) = 0.
If sires and dams are truly random,
Ie2 = .75I (additive genetic variance)
+ I (environmental variance).
0 1
s
Zs R X Zs R1 Zs + A1 s2 Zs R1 Z
=
0 1
0 1
0 1
1
u
Z R X Z R Zs
ZR Z+G
X0 R1 y
0 1
Zs R y .
Z0 R1 y
(3)
0
2
1 2
s = Zs y .
Zs X Zs Zs + A e /s Zs Z
0
0
0
2 1
u
Z0 y
Z X Z Zs
Z Z + e G
(4)
Sires
1
2
3
nij
Herds
2 3
5 0
8 4
2 6
1
3
0
4
yij.
Herds
4 1 2 3
0 25 34
0 74 31
8 23 11 43
73
1.0
.5 .5
1.0 .25
.
1.0
A =
h is fixed.
The ordinary LS equations are
0
12
0 3
0 0
20 4
7
5
8
2
0
15
0
4
6
0
0
10
0
0
8
0
0
0
8
59
105
150
48
119
74
73
(5)
28 8 8 3
28
0 0
36
4
5
8
2
0
15
0
4
6
0
0
10
0
0
8
0
0
0
8
59
105
150
48
119
74
73
(6)
.0764
.0436
.0432
.0574
.0545
.0434
.0432
.0436
.0712
.0320
.0370
.0568
.0477
.0320
.0593
.2014
.0468
.0504
.0593
.
.0410
.0468
.1206
.0473
.0410
.0556
.0504
.0473
.1524
.0556
.0714
.0593
.0410
.0556
.1964
(7)
The solution is
s0 = (.036661, .453353, .435022),
0 = (7.121439, 7.761769, 7.479672, 9.560022).
h
Let us estimate e2 from the residual mean square using OLS reduction, and e2 by
MIVQUE type computations. A solution to the OLS equations is
[10.14097, 11.51238, 9.12500, 2.70328, 2.80359, 2.67995, 0]
This gives a reduction in SS of 2514.166.
y0 y = 2922.
Then
e2 = (2922 - 2514.166)/(40-6) = 11.995. MIVQUE requires computation of s0 A1s
and equating to its expectation.
A1
5 2 2
1
4
0
=
2
.
3
2
0
4
s0 A1s = .529500.
Var (RHS of mixed model equations) = [Matrix (23.5)] e2 +
8 0 0
0 12 0
0 0 20
8 0 0 3 5 0 0
2
3 0 4
0 0 8 4 0
A 0 12
s .
5 8 2
0 0 20 4 2 6 8
0 4 6
0 0 8
3
64
48
144
80 40 80 40 32
60 30 132 66 24
37 56 43 44
s2 .
151 83
5
64 56
64
(8)
2
.007677 .006952
V ar(s) =
e
.007599
s .
.052193
(9)
Then E(s0 A1s) = tr(A1 [matrix (23.9)]) = .033184 e2 + .181355 s2 . With these results
e2 as 11.995. This is an approximate
we solve for
s2 and this is .7249 using estimated
MIVQUE solution because
e2 was computed from the residual of ordinary least squares
reduction rather than by MIVQUE.
Chapter 24
Animal Model, Repeated Records
C. R. Henderson
1984 - Guelph
In this chapter we deal with a one trait, repeated records model that has been extensively used in animal breeding, and particularly in lactation studies with dairy cattle.
The assumptions of this model are not entirely realistic, but may be an adequate approximation. The scalar model is
0
(1)
represents fixed effects, and xij relates the j th record of the ith animal to elements of
.
0
u represents other random effects, and zij relates the record to them.
ci is a cow effect. It represents both genetic merit for production and permanent
environmental effects.
eij is a random error associated with the individual record.
The vector representation is
y = X + Zu + Zc c + e.
(2)
V ar(u) = G,
V ar(c) = I c2 if cows are unrelated, with c2 = a2 + p2
= A a2 + I p2 if cows are related,
where p2 is the variance of permanent environmental effects, and if there are non-additive
genetic effects, it also includes their variances. In that case I p2 is only approximate.
V ar(e) = I e2 .
Cov(u, a0 ), Cov(u, e0 ), and Cov(a, e0 ) are all null. For the related cow model let
Zc c = Zc a + Zc p.
(3)
It is advantageous to use this latter model in setting up the mixed model equations, for
then the simple method for computing A1 can be used. There appears to be no simple
method for computing directly the inverse of V ar(c).
X0 X
X0 Z
X0 Zc
0
0
2 1
Z0 Zc
Z X Z Z + e G
2
0
0
0
Zc Zc + A1 e2
Zc Z
Zc X
0
Zc X
Zc Z
X0 Zc
Z0 Zc
0
Zc Zc
Zc Zc + I e2
Zc Zc
X0 y
Z0 y
0
Zc y
0
Zc y
(4)
These equations are easy to write provided G1 is easy to compute, G being diagonal, e.g.
0
as is usually the case. A1 can be computed by the easy method. Further Zc Zc + Ie2 /p2
can be absorbed easily. In fact, one would not need to write the p
is diagonal, so p
0
2 1
equations. See Henderson (1975b). Also Z Z+e G is sometimes diagonal and therefore
can be absorbed easily. If predictions of breeding values are of primary interest, a
is
u
what is wanted. If, in addition, predictions of real producing abilities are wanted, one
. Note that by subtracting the 4th equation of (24.4) from the 3rd we obtain
needs p
A1
e2 /a2
I
a
e2 /p2
= 0.
p
Consequently
=
p
p2 /a2
,
A1 a
(5)
(6)
I+
p2 /a2
A1
.
a
1 .25 .125
1
.5
1
3 0 0 1
3 0 1
2 1
1
1
0
0
2
0
1
1
0
0
2
1
0
0
0
0
0
1
1
1
1
3
0
0
0
3
1
1
0
0
2
0
0
0
2
0
1
1
0
0
2
0
0
0
2
1
0
0
0
0
0
1
0
0
0
1
t
a
=
p
12
18
17
19
8
16
4
19
8
16
4
(7)
Note that the last 4 equations are identical to equations 4-7. Thus a and p are confounded
in a fixed model. Now we add 2.2 A1 to the 4-7 diagonal block of coefficients and 2.75
I to the 8-11 diagonal block of coefficients. The resulting coefficient matrix is in (24.8).
2.2 = .55/.25, and 2.75 = .55/.2.
3.0
0
3.0
0
0
2.0
1.0
1.0
0
1.0
1.0
1.0
1.0
0
1.0
0
1.0
0
7.2581 1.7032 .9935 1.4194
5.0280 .1892
.5677
5.3118 1.1355
4.4065
1.0
1.0
1.0
3.0
0
0
0
5.75
1.0
1.0
0
0
2.0
0
0
0
4.75
0
1.0
1.0
0
0
2.0
0
0
0
4.75
1.0
0
0
0
0
0
1.0
0
0
0
3.75
(8)
943
306
205
303
215
126
60
152
26
414
227
236
225 64
24
26
15
390
153
107
0
64
31
33
410
211
14
37 53
2
406
3
48
2 42
261
38
41
24
286
21
18
290
12
310
(9)
The solution is
t0 = (4.123 5.952 8.133),
0 = (.065, .263, .280, .113),
a
0 = (.104, .326, .285, .063).
p
We next estimate e2 , a2 , p2 , by MIVQUE with the priors that were used in the above
0
mixed model solution. The Zc W submatrix for both a and p is
1
1
0
1
1
1
1
0
1
0
1
0
3
0
0
0
0
2
0
0
0
0
2
0
0
0
0
1
3
0
0
0
0
2
0
0
0
0
2
0
0
0
0
1
(10)
The variance of the right hand sides of the mixed model equations contains
0
W0 Zc AZc W a2 , where W = (X Z Zc Zc ). The matrix of coefficients of a2 is in (24.11).
0
V ar(r) also contains W0 Zc Zc W p2 and this matrix is in (24.12). The coefficients of e2
are in (24.7).
3.0 4.5
9.0
3.25
3.5
1.5
3.0
4.0
2.5
3.5
3.0
3.0
1.0
4.0
1.65
1.13
1.0
1.5
.25
1.0
1.0
6.0
6.0
4.5
9.0
3.0
3.0
1.5
9.0
3.25
3.5
1.5
3.0
4.0
1.0
.25
3.0
4.0
2.5
3.5
3.0
3.0
1.0
4.0
1.0
3.0
1.0
4.0
1.63
1.13
1.0
1.5
.25
1.0
1.0
1.5
.25
1.0
1.0
(11)
3 2 1 3
3 2 3
2 3
2
2
0
0
4
0
2
2
0
0
4
1
0
0
0
0
0
1
3
3
3
9
0
0
0
9
2
2
0
0
4
0
0
0
4
0
2
2
0
0
4
0
0
0
4
1
0
0
0
0
0
1
0
0
0
1
(12)
Now V ar(
a) contains Ca (V ar(r))C0a a2 , where Ca is the matrix formed by rows 4-9 of the
0
matrix in (24.9). Then Ca (V ar(r))Ca is
.0168 .0012 .0061
.0012
.0236
.0160
.0274
.0331
.0136
.0310
a2
(13)
p2
(14)
.0219
.0042 e
.0252
(15)
0 A1 a
0 = .2067. The expectation of this is
We need a
trA1 [matrix (24.13) + matrix (24.14) + matrix (24.15)]
= .1336 e2 + .1423 a2 + .2216 p2 .
To find V ar(
p) we use Cp , the last 6 rows of (24.9).
.0429 .0135 .0223 .0071
V ar(
p) =
.0337
.0040
.0197
.0586 .0014
.0298
a2
(16)
p2
(17)
.0374 .0133
.0341
e2 .
(18)
0p
= .2024 with expectation
We need p
tr[matrix (24.16) + matrix (24.17) + matrix (24.18)]
= .1498 e2 + .1419 a2 + .2588 p2 .
0 e
.
We need e
= [I WCW0 ]y,
e
where C = matrix (24.9), and I WCW0 is
.4548 .1858
.1113 .1649
.0536
.0289 .0289
.4079
.0104
.0466
.0570
.0337
.0337
.4620
.2073
.0238
.0238
.4647
.1390 .1390
.3729 .3729
.3729
Then
e
e
e
V ar(
e)
V ar(y)
0
=
=
=
=
1 .5 .5
1. .5
1. .125 .5 1.
1.
.5 .125
1. .5
=
1.
.5
.25
.5
.5
.25
1.
1 0 0 1
1 0 0
1 0
0
1
0
0
1
0
0
0
0
0
1
1
0
0
1
0
0
1
0
0
0
0
0
1
0
1
6
1.
.5
.5
1.
.5
.5
1.
.5
.25
.5
.5
.25
1.
.5
1.
a2
p2 + Ie2 .
(19)
Chapter 25
Sire Model, Repeated Records
C. R. Henderson
1984 - Guelph
This chapter is a combination of those of Chapters 23 and 24. That is, we are
concerned with progeny testing of sires, but some progeny have more than one record.
The scalar model is
yijk = x0ijk + z0ijk u + si + pij + eijk .
u represents random factors other than s and p. It is assumed that all dams are unrelated
and all progeny are non-inbred. Under an additive genetic model the covariance between
any record on one progeny and any record on another progeny of the same sire is s2 =
1
h2 y2 if sires are a random sample from the population. The covariance between
4
any pair of records on the same progeny is s2 + p2 = ry2 . If sires are unselected,
p2 = (r 41 h2 )y2 , e2 = (1 r)y2 , s2 = 41 h2 y2 .
In vector notation the model is
y = X + Zu + Zs s + Zp p + e.
V ar(s) = A s2 , V ar(p) = I p2 , V ar(e) = I e2 .
With field data one might eliminate progeny that do not have a first record in order to
reduce bias due to culling, which is usually more intense on first than on later records.
Further, if a cow changes herds, the records only in the first herd might be used. In this
case useful computing strategies can be employed. The data can be entered by herds, and
0
p easily absorbed because Zp Zp + Ie2 /p2 is diagonal. Once this has been done, fixed
effects pertinent to that particular herd can be absorbed. These methods are described in
detail in Ufford et al. (1979). They are illustrated also in a simple example which follows.
We have a model in which the fixed effects are herd-years. The observations are
displayed in the following table.
Sires Progeny 11
1
1
5
2
5
3
4
5
6
7
2
8
7
9
10
11
12
13
14
Herd - years
12 13 21 22 23
6 4
8
9 4
5 6 7
4 5
4 3
2
6
5 4
9
4
3 7 6
5 6
5
24
3
8
8
4
Ordering the solution vector hy, s, p the matrix of coefficients of OLS equations is in
(25.1), and the right hand side vector is (17, 43, 16, 12, 27, 29, 23, 88, 79, 15, 13, 13, 21,
9, 7, 10, 13, 9, 9, 4, 16, 19, 9)0 .
X0 X = diag (3, 6, 4, 3, 5, 6, 4)
0
Zs Zs = diag (17, 14)
0
Zp Zp = diag (3, 2, 2, 4, 2, 2, 2, 2, 2, 1, 1, 3, 3, 2)
0
Zs X =
0
Zs Zp =
2 3 2 2 3 3 2
1 3 2 1 2 3 2
3 2 2 4 2 2 2 0 0 0 0 0 0 0
0 0 0 0 0 0 0 2 2 1 1 3 3 2
Zp X =
1
1
0
0
0
0
0
1
0
0
0
0
0
0
1
1
1
0
0
0
0
1
1
1
0
0
0
0
1
0
1
0
0
0
0
0
1
0
1
0
0
0
0
0
0
1
1
0
0
0
0
0
0
1
0
0
0
0
0
1
1
1
0
0
0
0
0
1
1
0
0
0
0
1
0
1
1
0
0
0
0
1
1
1
0
0
0
1
0
0
1
0
0
0
0
0
1
1
(1)
4.191 .811
2.775
0
0
0
0
0
0
0
0
0
0
0
0
2.297 .703 .411 .184
3.778 .930 .411
4.486 .996
3.004
c
hy
s
6.002
21.848
4.518
1.872
10.526
9.602
9.269
31.902
31.735
3
.736
.415
1.151
1.417
.736
1.002
.677
.321
1.092
.642
1.092
1.057
.677
.736
15.549 2.347
14.978
c and
The solution for hy
s are the same as before.
c can be absorbed
If one chooses, and this would be mandatory in large sets of data, hy
c are in block diagonal form. When hy
c is
herd by herd. Note that the coefficients of hy
absorbed, the equations obtained are
12.26353 5.222353
5.22353
12.26353
s1
s2
1.1870
1.1870
The solution is approximately the same for sires as before, the approximation being due
to rounding errors.
Chapter 26
Animal Model, Multiple Traits
C. R. Henderson
1984 - Guelph
No Missing Data
In this chapter we deal with the same model as in Chapter 22 except now there are 2 or
more traits. First we shall discuss the simple situation in which every trait is observed
on every animal. There are n animals and t traits. Therefore the record vector has nt
elements, which we denote by
y0 = [y10 y20 . . . yt0 ].
y10 is the vector of n records on trait 1, etc. Let the model be
y1
y2
..
.
X1
0 ... 0
0 X2 . . . 0
..
..
..
.
.
.
0
0 . . . Xt
yt
I
0
..
.
0 ...
I ...
..
.
0
0
..
.
0 0 ... I
1
2
..
.
t
a1
a2
..
.
e1
e2
..
.
(1)
et
at
(2)
Every Xi has n rows and pi columns, the latter corresponding to i with pi elements.
Every I has order n x n, and every ei has n elements.
V ar
a1
a2
..
.
at
= G.
(3)
gij represents the elements of the additive genetic variance-covariance matrix in a noninbred population.
V ar
e1
e2
..
.
et
= R.
(4)
..
..
.
=
.
.
A1 g 1t . . . A1 g tt
G1
(5)
g ij are the elements of the inverse of the additive genetic variance covariance matrix.
Ir11 . . . Ir1t
.
..
=
.
.
..
Ir1t . . . Irtt
R1
(6)
rij are the elements of the inverse of the environmental variance-covariance matrix. Now
the GLS equations regarding a fixed are
X0 1 X1 r11 . . .
.
.
.
X0 1 Xt r1t
..
.
X0 t X1 r1t
X1 r11
..
.
. . . X0 t Xt rtt
. . . Xt r1t
..
.
X0 t r1t
Ir11
..
.
X1 r1t
. . . Xt rtt
Ir1t
X0 1 y1 r11 + . . . +
..
.
X0 1 r1t
o1
.
..
.
.
.
o
. . . X0 t rtt
t
1t
1
. . . Ir
..
..
.
.
X0 1 r11 . . .
..
.
X0 1 yt r1t
..
X0 t yt rtt
yt r1t
..
X0 t y1 r1t
y1 r11
..
.
+ ... +
+ ... +
y1 r1t
+ . . . + yt rtt
. . . Irtt
t
a
(7)
The mixed model equations are formed by adding (26.5) to the lower t2 blocks of (26.7).
If we wish to estimate the gij and rij by MIVQUE we take prior values of gij and rij
and e
needed for
for the mixed model equations and solve. We find that quadratics in a
MIVQUE are
0i A1 a
j for i = l, . . . , t; j = i, . . . , t.
a
(8)
0i e
j for i = l, . . . , t; j = i, . . . , t.
e
2
(9)
X0 1 Ar1k r1k . . .
..
.
X0 1 Ar1k rtk
..
X0 t Artk r1k
Ar1k r1k
..
.
. . . X0 t Artk rtk
. . . Ar1k rtk
..
.
Artk r1k
. . . Artk rtk
(10)
The ij th sub-block of the upper left set of t t blocks is X0 i AXj rik rjk . The sub-block of
the upper right set of t t blocks is X0 i Arik rjk . The sub-block of the lower right set of
t t blocks is Arik rjk .
The matrix for gkm is
P T
T0 S
(11)
where
2X0 1 AX1 r1k r1m
...
.
P =
..
X0 t AX1 (r1k rtm + r1m rtk ) . . .
...
2X0 1 Ar1k r1m
.
T =
..
X0 t A(r1k rtm + r1m rtk ) . . .
..
,
.
0
tk tm
2X t AXt r r
..
,
.
0
tk tm
2X t Ar r
and
2Ar1k r1m
...
.
S =
..
A(r1k rtm + r1m rtk ) . . .
..
.
.
2Artk rtm
(12)
(13)
(14)
The matrix for rkk is the same as (26.10) except that I replaces A. Thus the 3 types
of sub-blocks are X0 i Xj rik rjk , X0 i rik rjk , and Irik rjk . The matrix for rkm is the same as
(26.11) except that I replaces A. Thus the 3 types of blocks are X0 i Xj (rik rjm + rim rjk ),
X0 i (rik rjm + rim rjk ), and I(rik rjm + rim rjk ).
Now define the p + 1, . . . , p + n rows of a g-inverse of mixed model coefficient matrix
as C1 , the next n rows as C2 , etc., with the last n rows being Ct . Then
V ar(
ai ) = Ci [V ar(r)]C0i ,
(15)
where V ar(r) = variance of right hand sides expressed as matrices multiplied by the gij
and rij as described above.
0j ) = Ci [V ar(r)]C0j .
Cov(
ai , a
(16)
0i ) = trA1 V ar(
E(
ai A1 a
ai ).
(17)
0j ) = trA1 Cov(
0j ).
E(
ai A1 a
ai , a
(18)
Then
(19)
(20)
Let the first n rows of (26.19) be denoted B1 , the next n rows B2 , etc. Also let
Bi (Bi1 Bi2 . . . Bit ).
(21)
1 is symmetric and
Each Bij has dimension n n and is symmetric. Also I WCW0 R
as a consequence Bij = Bji . Use can be made of these facts to reduce computing labor.
Now
i = Bi y (i = 1, . . . , t).
e
V ar(
ei ) = Bi [V ar(y)]B0i .
j ) = Bi [V ar(y)]B0j .
Cov(
ei , e
(22)
(23)
(24)
t
X
B2ik rkk +
k=1
t
X
t1
X
t
X
k=1 m=k+1
t1
X
k=1
t
X
k=1 m=k+1
(25)
j ) =
Cov(
ei , e
t
X
k=1
+
+
+
t1
X
t
X
k=1 m=k+1
t
X
k=1
t1
X
t
X
(26)
k=1 m=k+1
i ) = trV ar(
E(
e0i e
ei ).
0
j ) = trCov(
0j ).
E(
ei e
ei , e
(27)
(28)
Note that only the diagonals of the matrices of (26.25) and (26.26) are needed.
Missing Data
When data are missing on some traits of some of the animals, the computations are more
difficult. An attempt is made in this section to present algorithms that are efficient for
computing, including strategies for minimizing data storage requirements. Henderson and
Quaas (1976) discuss BLUP techniques for this situation.
The computations for the missing data problem are more easily described and carried
out if we order the records, traits within animals. It also is convenient to include missing
data as a dummy value = 0. Then y has nt elements as follows:
y0 = (y10 y20 . . . yn0 ),
where yi is the vector of records on the t traits for the ith animal. With no missing data
y11
y12
..
.
y1t
y21
y22
..
.
y2t
..
.
yn1
yn2
..
.
ynt
x011 0
0 x012
..
..
.
.
0
0
x021 0
0 x022
..
..
.
.
0
..
.
0
..
.
...
...
...
...
...
...
0
0
..
.
x01t
..
0
x2t
..
..
x0n1 0 . . .
0 x0n2 . . .
..
..
.
.
0
0 . . . x0nt
..
+
.
a11
a12
..
.
a1t
a21
a22
..
.
a2t
..
.
an1
an2
..
.
ant
e11
e12
..
.
e1t
e21
e22
..
.
e2t
..
.
en1
en2
..
.
ent
x0ij is a row vector relating the record on the j th trait of the ith animal to j , the fixed
P
effects for the j th trait. j has pj elements and j pj = p. When a record is missing,
it is set to 0 and so are the elements of the model for that record. Thus, whether data
are missing or not, the incidence matrix has dimension, nt by (p + nt). Now R has block
diagonal form as follows.
R1 0
... 0
R2 . . . 0
0
R = ..
(29)
..
..
.
.
.
.
0
. . . Rn
For an animal with no missing data, Ri is the t t environmental covariance matrix. For
an animal with missing data the rows (and columns) of Ri pertaining to missing data are
set to zero. Then in place of R1 ordinarily used in the mixed model equations, we use
R which is
R
0
1
.
(30)
...
R
n
R
1 is the zeroed type of g-inverse described in Section 3.3. It should be noted that Ri is
the same for every animal that has the same missing data. There are at most t2 1 such
unique matrices, and in the case of sequential culling only t such matrices corresponding
to trait 1 only, traits 1 and 2 only, . . . , all traits. Thus we do not need to store R and
R but only the unique types of R
i .
V ar(a) =
,
..
..
..
.
.
.
a1n G0 a2n G0 . . . ann G0
(31)
[V ar(a)]1 =
a11 G1
a12 G1
. . . a1n G1
0
0
0
22 1
2n 1
a12 G1
a
G
.
.
.
a
G
0
0
0
..
..
..
.
.
.
2n 1
nn 1
a1n G1
a
G
.
.
.
a
G
0
0
0
(32)
aij are the elements of the inverse of A. Note that all nt of the aij are included in the
mixed model equations even though there are missing data.
We illustrate prediction by the following example that includes 4 animals and 3 traits
with the j vector having 2, 1, 2 elements respectively.
Animal
1
2
3
4
1
5
2
2
Trait
2 3
3 6
5 7
3 4
- -
1 2
X for 1 = 1 3
,
1 4
for 2 =
1 ,
1
1 3
and for 3 = 1 4 .
1 2
1
0
0
1
0
0
0
0
0
1
0
0
2
0
0
3
0
0
0
0
0
4
0
0
0
1
0
0
1
0
0
1
0
0
0
0
0
0
1
0
0
1
0
0
1
0
0
0
0
0
3
0
0
4
0
0
2
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
5 3 1
6 4
.
7
Then R for animals 1 and 2 is
.3059 .2000
.0706
.4000 .2000
,
.2471
R for animal 3 is
0
0
.2692 .1538
,
.2308
.2 0 0
0 0
.
0
Suppose that
1.
A=
and
0 .5
0
1. .5 .5
1. .25
1.
2 1 1
3 2
G0 =
.
4
8
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
(33)
V ar(
a) is
3.0 2.0
4.0
0
0
0 1.0 .5 .5
0
0
0
0
0
0 .5 1.5 1.0
0
0
0
0
0
0 .5 1.0 2.0
0
0
0
2.0 1.0 1.0 1.0 .5 .5 1.0 .5 .5
3.0 2.0 .5 1.5 1.0 .5 1.5 1.0
4.0 .5 1.0 2.0 .5 1.0 2.0
2.0 1.0 1.0 .5 .25 .25
3.0 2.0 .25 .75 .5
4.0 .25 .5 1.0
2.0 1.0 1.0
3.0 2.0
4.0
(34)
Using the incidence matrix, R , G1 , and y we get the coefficient matrix of mixed
model equations in (26.35) . . . (26.37). The right hand side vector is (1.8588, 4.6235, .6077, 2.5674, 8.1113, 1.3529, -1.0000, 1.2353, .1059, .2000, .8706, 0, .1923, .4615, .4000,
0, 0)0 . The solution vector is (8.2451, -1.7723, 3.9145, 3.4054, .8066, .1301, -.4723, .0154,
-.2817, .3965, -.0911, -.1459, -.2132, -.2480, .0865, .3119, .0681).
Upper left 8 x 8 (times 1000)
7176 1000
353
1271
612 400
141
1069
554
1708
200
400
200
725
2191
71 200
247
7100
212 600
741
1208 546
824
(35)
306
918
200
71
282
308
77
38
200
71
0
0
0 200 0 0
600
212
0
0
0 800 0 0
400 200
0
269 154
0 0 0
200
247
0 154
231
0 0 0
.
800
988
0 308
462
0 0 0
77 38 615
154
77
0 0 0
269 115
154 538
231
0 0 0
115
192
77
231 385
0 0 0
(36)
1387 623
154 538
231
103 359
154
952
77
231
385
51
154
256
1346 615
0
0
0
.
1000
0
0
0
718 308
513
(37)
EM Algorithm
In spite of its possible slow convergence I tend to favor the EM algorithm for REML
to estimate variances and covariances. The reason for this preference is its simplicity
as compared to iterated MIVQUE and, above all, because the solution remains in the
parameter space at each round of iteration.
i = BLUP for the n breeding values
If the data were stored animals in traits and a
th
on the i trait, gij would be estimated by iterating on
j + trA1 Cij )/n,
gij = (
ai A1 a
(38)
10
6
7
8
9
REAL *8 A( ), C( ), U( ), S
INTEGER T
.
.
.
NT=N*T
DO 7 I=1, T
DO 7 J=I, T
S=0. DO
DO 6 K=1, N
DO 6 L=1, N
S=S+A(IHMSSF(K,L,N))*U(T*K-T+I)*U(T*L-T+J)
Store S
.
.
.
DO 9 I=1, T
DO 9 J=I, T
S=0. DO
DO 8 K=1, N
DO 8 L=1, N
S=S+A(IHMSSF(K,L,N))*C(IHMSSF(T*K-T+I,T*L-T+J,NT))
Store S
1.912
.859 .893
2.742 1.768
.
3.635
11
0i Qi e
i ,
e
i=1
i is the vector of BLUP of errors for the t traits in the ith animal. The e
are
where e
computed as follows
ij
(39)
eij = yij x0ij oj a
when yij is observed. eij is set to 0 for yij = 0. This is not BLUP, but suffices for
subsequent computations. At each round we iterate on
+ trQij WCW0 ) i = 1, . . . , tij , j = i, . . . , t.
trQij R = (
e0 Qij e
(40)
(41)
B1ij
0
B2ij
.
(42)
Qij =
...
Bnij
There are at most t2 1 unique Bkij for any Qij , these corresponding to the same number
of unique R
k . The B can be computed easily as follows. Let
R
k =
(f1 f2 . . . ft ).
Then
Bkii = fi fi0
Bkij =
(fi fj0 )
(fi fj0 )0
(43)
for i 6= j.
(44)
In computing trQij R remember that Q and R have the same block diagonal form. This
computation is very easy for each of the n products. Let
Bkij =
Then the coefficient of rii contributed by the k th animal in the trace is bii . The coefficient
of rij is 2bij .
Finally note that we need only the n blocks of order t t down the diagonals of
WCW0 for trQij WCW0 . Partition C as
C=
(45)
and then zeroed for missing rows and columns, although this is not really necessary since
the Qkij are correspondingly zeroed. Xi is the submatrix of X pertaining to the ith animal.
This submatrix has order t p.
We illustrate some of these computations for r. First, consider computation of Qij .
Let us look at B211 , that is, the block for the second animal in Q11 .
.3059 .2000
.0706
.4000 .2000
R2 = .2000
.
.0706 .2000
.2471
Then
B211
.3059
.0936 .0612
.0216
.0400 .0141
= .2000 (.3059 .2000 .0706) =
.
.0706
.0050
Look at B323 , that is, the block for the third animal in Q23 .
0
0
0
.2692 .1538
R3 = 0
.
0 .1538
.2308
Then
B323
0
0
0
0
.0621
= 0 .0414
+ ()
0
.0237 .03500
0
0
0
.0858
= 0 .0828
.
0
.0858 .0710
13
Next we compute trQij R. Consider the contribution of the first animal to trQ12 R.
.1224
B112 =
.1624 .0753
.1600
.0682
.
.0282
0 0
X3 = 0 0
0 0
Cxx
3.1506 .6444
3.5851
=
Cx3 =
0 0 0
1 0 0
.
0 1 2
2.7324 .3894
.5254
.0750
2.3081
.0305
.
30.8055 8.5380
2.7720
.2759
.2264
.1692
.0755 .0107
.7006
1.9642
C33 =
.9196 .9374
2.7786 1.8497
.
3.8518
1.9642
.3022 .2257
2.5471 1.6895
.
4.9735
Since the first trait was missing on animal 3, the block of WCW0 becomes
0
0
2.5471 1.6895
.
4.9735
.649412
.301176
.320000
.272941
.056471
.322215
.160000
.254118
.069758
.392485 .402840
.103669
.726892 .268653
.175331
14
.
4.965
15
Chapter 27
Sire Model, Multiple Traits
C. R. Henderson
1984 - Guelph
This section deals with a rather simple model in which there are t traits measured on the
progeny of a set of sires. But the design is such that only one trait is measured on any
progeny. This results in R being diagonal. It is assumed that each dam has only one
recorded progeny, and the dams are non-inbred and unrelated. An additive genetic model
is assumed. Order the observations by progeny within traits. There are t traits and k
sires. Then the model is
y1
X1
0 ... 0
1
X2 . . . 0
y2
0
2
. = .
..
.. ..
.
.
.
.
.
. .
yt
Xt
Z1 0 . . . 0
0 Z2 . . . 0
..
..
..
.
.
.
0
0
Zt
s1
s2
..
.
st
e1
e2
..
.
(1)
et
yi represents ni progeny records on trait i, i is the vector of fixed effects influencing the
records on the ith trait, Xi relates i to elements of yi , and si is the vector of sire effects
for the ith trait. It has k has a null column corresponding to such a sire.
V ar
s1
s2
..
.
st
..
..
..
= G.
.
.
.
Ab1t Ab2t . . . Abtt
(2)
A is the k k numerator relationship matrix for the sires. If the sires were unselected,
bij = gij /4, where gij is the additive genetic covariance between traits i and j.
V ar
e1
e2
..
.
et
Id1
0 ... 0
0
Id2 . . . 0
..
..
..
.
.
.
0
0 . . . Idt
1
= R.
(3)
..
0
d1
1 X 1 Z1 . . .
..
.
0
..
.
0
0
...
0
. . . d1
t X t Xt
1 0
1 0
0
d1 Z 1 Z1 . . .
d1 Z 1 X 1 . . .
..
..
..
.
.
.
0
. . . d1
t Z t Xt
0
Z
d1
X
t t
t
..
0
. . . d1
t Z t Zt
0
d1
1 X 1 y1
..
o1
.
.
.
0
..
.
ot
s1
..
.
st
0
d1
t X t yt
0
d1
1 Z 1 y1
..
.
0
d1
1 Z t yt
(4)
The mixed model equations are formed by adding G1 to the lower right (kt)2 submatrix
of (27.4), where
A1 b11 . . . A1 b1t
..
..
,
G1 =
(5)
.
.
1 1t
1 tt
A b
... A b
and bij is the ij th element of the inverse of
b11 . . . b1t
.
..
.
.
.
.
b1t . . . btt
With this model it seems logical to estimate di by
[y0 i yi ( oi )0 X0 i yi (uoi )0 Z0 i yi ]/[ni rank(Xi Zi )].
(6)
oi
uoi
X0 i yi
Z0 i yi
(7)
Then using these di , estimate the bij by quadratics in s, the solution to (27.4). The
quadratics needed are
s0i A1sj ;
i = 1, . . . , t;
j = i, . . . , t.
These are computed and equated to their expectations. We illustrate this section with a
small example. The observations on progeny of three sires and two traits are
2
Sire Trait
1
1
2
1
3
1
1
2
2
2
Progeny Records
5,3,6
7,4
5,3,8,6
5,7
9,8,6,5
Z1 =
0
0
0
1
1
0
0
0
0
0
0
0
0
0
1
1
1
1
, Z2 =
1
1
0
0
0
0
0
0
1
1
1
1
0
0
0
0
0
0
Suppose that
30 I9
0
0
25 I6
R=
1. .5 .5
A = .5 1. .25 ,
.5 .25 1.
and
b11 b12
b12 b22
3 1
1 2
Then
3. 1.5 1.5 1. .5 .5
3. .75 .5 1. .25
3.
.5
.25
1.
,
G=
2. 1. 1.
2. .5
2.
10 4 4 5
2
2
8
0
2 4
0
1
8
2
0
4
=
.
15 6 6
15
12
0
12
G1
X0 R1
Z0 R1
(X Z) =
1
150
45
0 15 10 20 0 0 0
36 0 0 0 12 24 0
15 0 0 0 0 0
10 0 0 0 0
20 0 0 0
12 0 0
24 0
0
(8)
Adding G1 to the lower 6 6 submatrix of (27.8) gives the mixed model coefficient
matrix. The right hand sides are [1.5667, 1.6, .4667, .3667, .7333, .48, 1.12, 0]. The
inverse of the mixed model coefficient matrix is
5.210
0.566
1.981
1.545
1.964
0.654
0.521
0.652
0.660
2.858
1.515
1.556
0.934
0.523
0.510
0.785
1.515
2.803
0.939
0.522
0.917
0.322
0.384
1.556
0.939
2.783
0.510
0.322
0.923
1.344
0.934
0.522
0.510
1.939
1.047
0.984
1.638
0.523
0.917
0.322
1.047
1.933
0.544
0.690
0.510
0.322
0.923
0.984
0.544
1.965
(9)
The solution to the MME is (5.2380, 6.6589, -.0950, .0236, .0239, -.0709, .0471, -.0116).
When multiple traits are observed on individual progeny, R is no longer diagonal. The
linear model can still be written as (27.1). Now, however, the yi do not have the same
number of elements, and Xi and Zi have varying numbers of rows. Further,
R=
I r11
P012 r12
..
.
P12 r12 . . .
I r22 . . .
..
.
P01t r1t
P02t r2t
P1t r1t
P2t r2t
..
.
(10)
. . . I rtt
The I matrices have order equal to the number of progeny with that trait recorded.
r11 . . .
.
.
.
r1t
..
.
r1t . . . rtt
is the error variance-covariance matrix. We can use the same strategy as in Chapter 25
for missing data. That is, each yi is the same length with 0s inserted for missing data.
4
Accordingly, all Xi and Zi have the same number of rows with rows pertaining to missing
observations set to 0. Further, R is the same as for no missing data except that rows
corresponding to missing observations are set to 0. Then the zeroed type of g-inverse of
R is
.
.
.
D1t D2t . . . Dtt
Each of the Dij is diagonal with order, n. Now the GLS equations for fixed s are
X0 1 D11 X1 . . .
.
.
.
X0 1 D1t Xt
..
.
X0 t D1t X1
Z0 1 D11 X1
..
.
. . . X0 t Dtt Xt
. . . Z0 1 D1t Xt
..
.
X0 t D1t Z1
Z0 1 D11 Z1
..
.
Z0 t D1t X1
. . . Z0 t Dtt Xt
Z0 t D1t Z1
o1
X0 1 D1t Zt
..
..
.
.
o
0
. . . X t Dtt Zt
t
0
1
. . . Z 1 D1t Zt
s
.
..
.
.
.
X0 1 D11 Z1 . . .
..
.
. . . Z0 t Dtt Zt
X0 1 D11 y1 + . . . + X0 1 D1t yt
..
..
.
.
X0 t D1t y1 + . . . + X0 t Dtt yt
Z0 1 D11 y1 + . . . + Z0 1 D1t yt
..
..
.
.
Z0 t D1t y1
+ . . . + Z0 t Dtt yt
st
With G1 added to the lower part of (27.12) we have the mixed model equations.
We illustrate with the following example.
Trait
Sire Progeny 1 2
1
1
6 5
2
3 5
3
- 7
4
8 2
5
4 6
6
- 7
7
3 3
8
5 4
9
8 We assume the same G as in the illustration of Section 27.1, and
r11 r12
r12 r22
=
5
30 10
10 25
(12)
We assume that the only fixed effects are 1 and 2 . Then using the data vector with
length 13, ordered progeny in sire in trait,
X01 = (1 1 1 1 1 1 1), X02 = (1 1 1 1 1 1),
Z1 =
1
1
1
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
1
1
, Z2 =
1
1
1
0
0
0
0
0
0
1
1
0
0
0
0
0
0
1
R=
30
0
30
0
0
30
0
0
0
30
0
0
0
0
30
0
0
0
0
0
30
0 10 0
0 0 10
0 0 0
0 0 0
0 0 0
0 0 0
30 0 0
25 0
25
0 0
0 0
0 0
0 10
0 0
0 0
0 0
0 0
0 0
25 0
25
0 0
0 0
0 0
0 0
0 0
0 10
0 0
.
0 0
0 0
0 0
0 0
25 0
25
0.254
0.061
0.110
0.072
0.071
0.031
0.015
0.015
0.061
0.110
0.072
0.072 0.031 0.015 0.015
0.265 0.031 0.015 0.015
0.132
0.086
0.046
0.031
0.110
0.0
0.0
0.031
0.0
0.0
0.015
0.0
0.072
0.0
0.0
0.015
0.0
0.015
0.0
0.0
0.072
0.0
0.0
0.015
0.132 0.031
0.0
0.0
0.132
0.0
0.0
0.086
0.0
0.015
0.0
0.0
0.086
0.0
0.046
0.0
0.0
0.015
0.0
0.0
0.046
(13)
G1 is added to the lower 6 6 submatrix to form the mixed model coefficient matrix.
The right hand sides are (1.0179, 1.2062, .4590, .1615, .3974, .6031, .4954, .1077). The
6.065
1.607
2.111
1.735
1.709
0.702
0.602
0.546
0.711
2.880
1.533
1.527
0.953
0.517
0.506
0.625
1.533
2.841
0.893
0.517
0.938
0.303
.
0.519
1.527
0.893
2.844
0.506
0.303
0.944
1.472
0.953
0.517
0.506
1.939
1.028
1.003
1.246
0.517
0.938
0.303
1.028
1.924
0.562
1.017
0.506
0.303
0.943
1.002
0.562
1.936
(14)
The solution is [5.4038, 5.8080, .0547, -.1941, .1668, .0184, .0264, -.0356].
If we use the technique of including in y, X, Z, R, G the missing data we have
X01 = (1 1 0 1 1 0 1 1 1),
Z1 =
1
1
0
1
0
0
0
0
0
0
0
0
0
1
0
1
0
0
0
0
0
0
0
0
0
1
1
X02 = (1 1 1 0 1 1 0 1 0),
, Z2 =
1
1
1
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
0
0
1
0
The methods of Section 27.2 could be used for sire evaluation using progeny with repeated
records (lactations, e.g.), but we do not wish to invoke the simple repeatability model.
Then lactation 1 is trait 1, lactation 2 is trait 2, etc.
7
Chapter 28
Joint Cow and Sire Evaluation
C. R. Henderson
1984 - Guelph
At the present time, (1984), agencies evaluating dairy sires and dairy females have
designed separate programs for each. Sires usually have been evaluated solely on the
production records of their progeny. With the development of an easy method for computing A1 this matrix has been incorporated by some agencies, and that results in the
evaluation being a combination of the progeny test of the individual in question as well
as progeny tests of his relatives, eg., sire and paternal brothers. In addition, the method
also takes into account the predictions of the merits of the sires of the mates of the bulls
being evaluated. This is an approximation to the merits of the mates without using their
records.
In theory one could utilize all records available in a production testing program
and could compute A1 for all animals that have produced these records as well as
additional related animals without records that are to be evaluated. Then these could be
incorporated into a single set of prediction equations. This, of course, could result in a set
of equations that would be much too large to solve with existing computers. Nevertheless,
if we are willing to sacrifice some accuracy by ignoring the fact that animals change herds,
we can set up equations that are block diagonal in form that may be feasible to solve.
0
1
2
..
.
k
r0
r1
r2
..
.
rk
0 = ( 0 a0 ),
0
0
0
i = ( i ai ).
Then with this form of the equations the herd unknowns can be absorbed into the 0
and a0 equations provided the Cii blocks can be readily inverted. Otherwise one would
need to solve iteratively. For example, one might first solve iteratively for 0 and a0 sires
ignoring i , ai . Then with these values one would solve iteratively for the herd values.
Having obtained these one would re-solve for 0 and the a0 values, adjusting the right
hand sides for the previously estimated herd values.
The AI sire equations would also contain values for the base population sires. A
base population dam with records would be included with the herd in which its records
were made. Any base population dam that has no records, has only one AI son, and has
no female progeny can be ignored without changing the solution.
The simplest example of joint cow and sire evaluation with multiple herds involves a single
trait and with only one record per tested animal. We illustrate this with the following
example.
2
1 .5 .5 0 0 .25
.25
0
1 .25 .5 0 .5
.375 0
1 0 0 .125
.5
.5
1
0
.25
.5
0
1 .5
0
0
1
.1875
0
1
.25
0 .25
.25
0
.5
.125
0 .375
.5
0 .25
0
0
0
0
0 .25 .0625
0 .3125 .25
0
.5
.25
1
0
.5
1
.1875
1
(1)
A1 shown in (28.2)
2 1 1 .5
3
0 1
3 .5
0
0
0 .5 0
0
0
.5 1
0 .5 0
1
0
0
0 1 1 .5
0 1
0
0 1
0 0
0
0
1.5 1
0
0 0
0
0
2
0
0 0
0
0
2
0 0
0
0
2 0
1
0
1.5
0 1
2
0
2
(2)
Note that the lower 8 × 8 submatrix is block diagonal with two blocks of order 4 × 4 down the diagonal and 4 × 4 null off-diagonal blocks. The model assumed for our illustration is

yij = hi + aij + eij,

where i refers to herd and j to individual within herd. Then with ordering

(a1, a4, a5, h1, a2, a6, a8, a11, h2, a3, a7, a9, a10)

the incidence matrix is as shown in (28.3). Note that no overall μ appears in the model.
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
1
1
1
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
(3)
6 3 3 0 1.5 0
0
0 0 1.5 0
0
0
9
0
0 3 1.5 3
0 0 1.5 0 3
0
9
0
1.5
0
0
3
0
3
1.5
0
3
4 1
1
1
1 0 0
0
0
0
7
0
0 3 0 0
0
0
0
5.5 3
0 0 0
0
0
0
7
0 0 0
0
0
0
7
0
0
0
0
0
4 1
1
1
1
7
0 3
0
5.5
0 3
7
0
a
1
a
4
a
5
1
a
2
a
6
a
8
a
11
2
a
3
a
7
a
9
a
10
0
0
0
16
3
2
5
6
21
7
9
2
3
(4)
Note that the lower 10 × 10 block of the coefficient matrix is block diagonal with 5 × 5 blocks down the diagonal and 5 × 5 null blocks off the diagonal. The solution to (28.4) is
[-.1738, -.3120, -.0824, 4.1568, -.1793, -.4102, -.1890, .1512, 5.2135, .0857, .6776, .5560, -.0611].
Note that the solution to (a1, a4, a5) could be found by absorbing the other equations as follows.
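The absorption referred to here is the usual partitioned solution: if the retained equations have coefficient block C11 and the absorbed equations C22, the reduced system is C11 − C12 C22⁻¹ C21 with right hand side r1 − C12 C22⁻¹ r2. A short sketch, with small arbitrary matrices standing in for the blocks of (28.4):

```python
import numpy as np

def absorb(C11, C12, C22, r1, r2):
    """Absorb the second block of equations into the first:
    returns the reduced coefficient matrix and right hand side."""
    C22inv_C21 = np.linalg.solve(C22, C12.T)       # C22^{-1} C21
    C_red = C11 - C12 @ C22inv_C21
    r_red = r1 - C12 @ np.linalg.solve(C22, r2)
    return C_red, r_red

# tiny illustration with arbitrary numbers (not the values of (28.4))
C11 = np.array([[4., 1.], [1., 4.]])
C12 = np.array([[1., 0., 2.], [0., 1., 1.]])
C22 = np.diag([5., 6., 7.])
r1 = np.array([3., 2.])
r2 = np.array([1., 4., 2.])
C_red, r_red = absorb(C11, C12, C22, r1, r2)
solution_first_block = np.linalg.solve(C_red, r_red)
```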
4 1 1
1
1
0 1.5 0
0
0
7 0
0 3
6 3 3
9
0
5.5 3
0
0 3 1.5 3 0
9
0 1.5 0
0 3
7
0
0
0
0
1.5 3 1.5
0 1.5 0
0 3 0
0
0 3
4 1 1
1
1
7 0 3
0
0 1.5 0
0
0
0 1.5 0 3
5.5
0 3
0
7
0
0
3
1.5
0
3
0
0
0
1.5 1.5 3
0
0 1.5 0
0
0
a
1
0
0 1.5
0
a4 = 0 0 3 1.5 3
0 3 0
0
0
1.5
0
0
3
a
0
0 3
4 1
1
1
1
0
0 3
5.5 3
0
7
0
4 1
16
3
2
5
6
0 1.5 0
0
0
0 1.5 0 3
0
0 3 1.5
0 3
1
1
1
0 3
0
5.5
0 3
7
0
21
7
9
2
3
Iterations on these equations were carried out by two different methods. First, the herd equations were iterated 5 rounds with the AI sire values fixed. Then the AI sire equations were iterated 5 rounds with the herd values fixed, and so on. It required 17 cycles (85 rounds) to converge to the direct solution previously reported. Regular Gauss-Seidel iteration produced convergence in 33 rounds. The latter procedure would, however, require more retrieval of data from external storage devices.
As our next example we use the same animals as before but now we have records as
follows.
Herd 1
        Year
Cow   1   2   3
 2    5   6   -
 6    4   5   3
 8    -   7   6
11    -   -   8

Herd 2
        Year
Cow   1   2   3
 3    8   -   -
 7    9   8   7
 9    -   8   8
10    -   -   7
We assume the model

yijk = μij + aik + pik + eijk,

where i refers to herd, j to year, and k to cow. It is assumed that h² = .25 and r = .45. Then

Var(a) = A σ²e/2.2,   Var(p) = I σ²e/2.75,

that is, σ²e/σ²a = 2.2 and σ²e/σ²p = 2.75. The diagonal coefficients of the p equations of OLS have 2.75 added to them. Then p can be absorbed easily. This can be done without writing the complete equations by weighting each observation by

2.75/(nik + 2.75),

where nik is the number of records on the ikth cow. These weights are .733, .579, .478 for 1, 2, 3 records respectively. Once these equations are derived, we then add 2.2 A⁻¹ to the appropriate coefficients to obtain the mixed model equations. The coefficient matrix is in (28.5) . . . (28.7), and the right hand side vector is (0, 0, 0, 4.807, 9.917, 10.772, 6.369, 5.736, 7.527, 5.864, 10.166, 8.456, 13.109, 5.864, 11.472, 9.264, 5.131). The unknowns are in this order: (a1, a4, a5, μ11, μ12, μ13, a2, a6, a8, a11, μ21, μ22, μ23, a3, a7, a9, a10). Note that block diagonality has been retained. The solution is
[.1956, .3217, .2214, 4.6512, 6.127, 5.9586, .2660, -.5509, .0045, .5004, 8.3515, 7.8439, 7.1948, .0377, .0516, .2424, .0892].
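The ratios 2.2 and 2.75 and the weights .733, .579, .478 quoted above follow directly from h² = .25 and r = .45, assuming the usual partition σ²a = h²σ²y, σ²p = (r − h²)σ²y, σ²e = (1 − r)σ²y. A small sketch of the arithmetic:

```python
# variance ratios and absorption weights implied by h^2 = .25, r = .45
h2, rep = 0.25, 0.45
var_a, var_p, var_e = h2, rep - h2, 1.0 - rep       # as fractions of phenotypic variance
lam_a = var_e / var_a                               # 2.2, multiplies A-inverse
lam_p = var_e / var_p                               # 2.75, added to the p diagonals
weights = {n: lam_p / (n + lam_p) for n in (1, 2, 3)}
# weights -> {1: 0.733..., 2: 0.578..., 3: 0.478...}, matching .733, .579, .478
```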
Upper 8 8
6.6
0
6.6
0
0
0
1.057
0
0
0
0
1.636
0
1.1
0
2.2
0
1.1
0
.579
0
.579
1.79
0
5.558
0
1.1
0
.478
.478
.478
0
4.734
(5)
0
0
0 0 0 1.1
0
0
0
2.2
0
0 0 0 1.1
0 2.2
0
0
2.2 0 0 0 2.2 1.1
0
2.2
0
0
0 0 0
0
0
0
0
.579
0
0 0 0
0
0
0
0
.579 .733 0 0 0
0
0
0
0
0
2.2 0 0 0
0
0
0
0
2.2
0
0 0 0
0
0
0
0
7
(6)
Lower right 9 9
5.558
0
5.133
0
0
1.211
0
0
0
1.057
0
0
0
0
1.79
0
0
.733
0
0
5.133
0
0
0
0
0
0
.478
0
0
.478 .579
0
0
2.2
0
4.734
0
2.2
5.558
0
5.133
(7)
Multiple Traits
As a final example of joint cow and sire evaluation we evaluate on two traits. Using the
same animals as before, the records are as follows.

Herd 1
        Trait
Cow   1   2
 2    6   8
 6    4   6
 8    9   -
11    -   3

Herd 2
        Trait
Cow   1   2
 3    7   -
 7    -   2
 9    -   8
10    6   9
We assume the model

yijk = μij + aijk + eijk,

where i refers to herd, j to trait, and k to cow. We assume that the error variance-covariance matrix for a cow and the additive genetic variance-covariance matrix for a non-inbred individual are

[5 2]        [2 1]
[2 8]  and   [1 3],

respectively. Then R is
block diagonal, with one block for each cow in the order the records enter y: cows 2, 6, and 10 (both traits) contribute the full 2 × 2 block

[5 2]
[2 8],

cows 8 and 3 (trait 1 only) contribute the scalar 5, and cows 11, 7, and 9 (trait 2 only) contribute the scalar 8. That is,

R = diag{ [5 2; 2 8], [5 2; 2 8], 5, 8, 5, 8, 8, [5 2; 2 8] },

an 11 × 11 matrix.
The right hand sides of the mixed model equations are (0, 0, 0, 0, 0, 0, 3.244, 1.7639,
.8889, .7778, .5556, .6111, 1.8, 0, 0, .375, 2.3333, 2.1167, 1.4, 0, 0, .25, 0, 1., .8333, .9167)
corresponding to ordering of equations,
[a11 , a12 , a41 , a42 , a51 , a52 , 11 , 12 , a21 , a22 , a61 , a62 , a81 , a82 , a11,1 , a11,2 , 21 , 22 , a31 , a32 ,
a71 , a72 , a91 , a92 , a10,1 , a10,2 ].
The lower part of the coefficient matrix is block diagonal with two 10 × 10 blocks down the diagonal
and 10 × 10 null off-diagonal blocks. The solution is
(.2087, .1766, .4469, .4665, .1661, .1912, 5.9188, 5.9184, .1351, .3356, -.1843, .1314, .6230,
.5448, -.0168, -.2390, 6.0830, 6.5215, .2563, .2734, -.4158, -.9170, .4099, .5450, -.0900,
.0718).
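R of this form is conveniently built cow by cow: a cow with both traits contributes the full 2 × 2 error block, and a cow with a single trait contributes the corresponding diagonal element. A minimal sketch under that assumption (the trait patterns listed are those of this example):

```python
import numpy as np
from scipy.linalg import block_diag

R0 = np.array([[5., 2.],
               [2., 8.]])          # error (co)variances for one cow, traits 1 and 2

# traits observed on each cow, in the order the records enter y
patterns = [(0, 1), (0, 1), (0,), (1,), (0,), (1,), (1,), (0, 1)]
blocks = [R0[np.ix_(p, p)] for p in patterns]
R = block_diag(*blocks)            # 11 x 11 here
Rinv = np.linalg.inv(R)            # used in X'R^{-1}X etc. of the mixed model equations
```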
Summary Of Methods
4. Zi ai: additive genetic values of all females that have made records in the ith herd. Some of these may be dams of AI sires. Others will be daughters of AI sires, and some will be both dams and daughters of different AI sires. Zi ai will also contain any sire with daughters only in the ith herd, or with daughters in so few other herds that this is ignored and he is regarded as a different sire in each of the other herds. One will need to decide how to handle such sires, that is, how many to include with the AI sires and how many to treat as a separate sire in each of the herds in which he has progeny.

5. A⁻¹ should be computed by Henderson's simple method, possibly ignoring inbreeding in large data sets, since this reduces computations markedly. In order to generate block diagonality in the mixed model equations, the elements of A⁻¹ for animals in Zi ai should be derived only from sires in a0 and from dams and sires in ai (same herd). This insures that there will be no non-zero elements of A⁻¹ between any pair of herds, provided ordering is done according to the following:
(1) X0 β0
(2) Z0 a0
(3) X1 β1
(4) Z1 a1
(5) X2 β2
(6) Z2 a2
Quaas and Pollak (1980) described a gametic additive genetic model that reduces the
number of equations needed for computing BLUP. The only breeding values appearing
in the equations are those of animals having tested progeny. Then individuals with no
progeny can be evaluated by taking appropriate linear functions of the solution vector.
The paper cited above dealt with multiple traits. We shall consider two situations, (1)
single traits with one or no record per trait and (2) single traits with multiple records and
the usual repeatability model assumed. If one does not choose to assume the repeatability
model, the different records in a trait can be regarded as multiple traits and the Quaas
and Pollak method used.
10
6.1
Za =
.5 (Sum of parental a
).
ei = yi xi o zi u
11
1 .5 0
A=
1 .25
1
1.5
.1
.2
.1
.3
.475 .25
0
2 =
u
.6
0
3
u
.433333
u4
The solution is
12
3.7
.5
.3
.2
.8
(8)
1
2
1
3
1
0
.5
.5
0
1
.5
0
G=
4 0
0 4
12 = 10 + .5(4), 13 = 10 + .75(4).
Then the mixed model equations are
.390064 .020833
.370833
3.112821
u1 = .891026 .
.383333
u2
(9)
The solution is
(2.40096, .73264, .57212).
(10)
=
=
=
=
6.2
This section is concerned with multiple records in a single trait and under the assumption
that
yi1
1 r r
yi2
r 1 r 2
=
V ar
yi3
r r 1 y ,
..
.. .. ..
.
. . .
where y has been adjusted for random factors other than producing ability and random
error. The subscript i refers to a particular animal. The model is
y = X + Za a + Zp p + possibly other random factors + e.
13
Aa2 0
0
a
2
V ar p = 0 Ip 0 .
e
0
0 Ie2
14
These two methods for repeated records are illustrated with the same animals as in
Section 28.8 except now there are repeated records. The 4 animals have 2,3,1,2 records
respectively. These are (5,3,4,2,3,6,7,8). X0 = (1 2 3 1 2 2 3 2). Let
a2 = .25,
p2 = .20,
e2 = .55.
Then the regular mixed model equations are in (28.11).
65.455
11.455 4.0
0
0
5.455
0
0
9.818
0
0
0
1.818
0
8.970
0
0
0
3.636
8.636
0
0
0
10.455
0
0
6.818
0
8.636
a
=
145.455
14.546
16.364
10.909
27.273
14.546
16.364
10.909
27.273
(11)
and p
are identical. The solution is
Note that the right hand sides for a
(1.9467, .8158, .1972, .5660, 1.0377, .1113, .3632, .4108, .6718).
Next the solution for the gametic
incidence matrix is
1 1
2 1
3 0
1 0
2 0
2 .5
3 .5
2 .5
(12)
1
1
0
0
0
0
0
0
15
0
0
1
1
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
1
corresponding to , a1 , a2 , p1 , p2 , p3 , p4 .
V ar(e) = .55 I,
!
.25 0
V ar(a) =
,
0 .25
V ar(p) = diag (.2, .2, .325, .3875).
Then the mixed model equations are
9.0
.454 3.636
0
.909 1.818
9.909
0
5.454 .909
0
8.636
0
0
0
10.454
0
0
4.895
0
6.217
145.454
a1
33.636
a
21.818
2
p1 = 14.546
p2
16.364
10.909
p3
27.273
p4
(13)
16
(14)
Chapter 29
Non-Additive Genetic Merit
C. R. Henderson
1984 - Guelph
All of the applications in previous chapters have been concerned entirely with additive
genetic models. This may be a suitable approximation, but theory exists that enables
consideration to be given to more complicated genetic models. This theory is simple for
non-inbred populations, for then we can formulate genetic merit of the animals in a sample
as
g = Σi gi.

g is the vector of total genetic values for the animals in the sample. gi is a vector describing values for a specific type of genetic merit. For example, g1 represents additive values, g2 dominance values, g3 additive × additive, g4 additive × dominance, etc. In a non-inbred, unselected population and ignoring linkage

Cov(gi, gj′) = 0

for all pairs of i ≠ j.

Var(additive) = A σ²a,
Var(dominance) = D σ²d,
Var(additive × additive) = A#A σ²aa,
Var(additive × dominance) = A#D σ²ad,
Var(additive × additive × dominance) = A#A#D σ²aad, etc.
The # operation on A and D is described below. These results are due mostly to Cockerham (1954). D is computed as follows. All diagonals are 1. dkm (k ≠ m) is computed from certain elements of A. Let the parents of k and m be g, h and i, j respectively. Then

dkm = .25(a_gi a_hj + a_gj a_hi).     (1)

In a non-inbred population at most one of the products in this expression can be greater than 0. To illustrate, suppose k and m are full sibs. Then g = i and h = j. Consequently

dkm = .25[(1)(1) + 0] = .25.

Suppose k and m are double first cousins. Then

dkm = .25[(.5)(.5) + 0] = .0625.

For non-inbred paternal half sibs from unrelated dams

dkm = .25[1(0) + 0(0)] = 0,

and for parent-progeny pairs dkm = 0.

The # operation on two matrices means that the new matrix is formed from the products of the corresponding elements of the two matrices. Thus the ijth element of A#A is a²ij, and the ijth element of A#D is aij dij. These are called Hadamard products. Accordingly, we see that all matrices for Var(gi) are derived from A.
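The computation of D and of the Hadamard products can be sketched as follows; the pedigree representation used here (a list of parent pairs, with None for an unknown parent) is only an illustrative convention.

```python
import numpy as np

def dominance_matrix(A, parents):
    """D for non-inbred animals: d_km = .25(a_gi a_hj + a_gj a_hi),
    where (g, h) are the parents of k and (i, j) the parents of m.
    parents[k] = (g, h), with None for an unknown parent; diagonals are 1."""
    n = A.shape[0]
    D = np.eye(n)
    for k in range(n):
        for m in range(k):
            g, h = parents[k]
            i, j = parents[m]
            if None in (g, h, i, j):
                continue                      # a term with an unknown parent contributes 0
            D[k, m] = D[m, k] = 0.25 * (A[g, i] * A[h, j] + A[g, j] * A[h, i])
    return D

# Hadamard products (elementwise products) give the remaining covariance matrices, e.g.
# Var(additive x additive) = (A * A) * s2aa,  Var(additive x dominance) = (A * D) * s2ad
```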
We shall describe BLUP procedures and estimation of variances in this and subsequent sections of Chapter 29 using a model with additive and dominance components. The extension to more components is straightforward. The model for y with no data missing is

y = (X I I)(β′ a′ d′)′ + e.

y is n × 1, X is n × p, both I are n × n, e is n × 1, β is p × 1, and a and d are n × 1.

Var(a) = A σ²a,  Var(d) = D σ²d,  Var(e) = I σ²e.
Cov(a, d′), Cov(a, e′), and Cov(d, e′) are all n × n null matrices. Now the mixed model equations are

[ X′X   X′               X′              ] [β°]   [X′y]
[ X     I + A⁻¹ σ²e/σ²a   I               ] [â ] = [ y ]     (2)
[ X     I                I + D⁻¹ σ²e/σ²d  ] [d̂ ]   [ y ]

Note that if a and d were regarded as fixed, the last n equations would be identical to the p+1, . . . , p+n equations, and we could then estimate only the sum a + d, not a and d separately.
Equations (29.3) and (29.4) give a reduced set of equations. Having solved for â in (29.4), we compute d̂ by (29.3). Note that the coefficient matrix of (29.4) is not symmetric.
σ²a, σ²d, σ²e can be estimated by MIVQUE. The quadratics to be computed and equated to their expectations are

â′A⁻¹â,  d̂′D⁻¹d̂,  and  ê′ê.     (5)
To obtain expectations of the first two of these we need Var(r), where r is the vector of right hand sides of (29.2). This is

[ X′AX  X′A  X′A ]        [ X′DX  X′D  X′D ]        [ X′X  X′  X′ ]
[ AX    A    A   ] σ²a +  [ DX    D    D   ] σ²d +  [ X    I   I  ] σ²e.     (6)
[ AX    A    A   ]        [ DX    D    D   ]        [ X    I   I  ]
C = Ca .
Cd
Ca and Cd each have n rows. Then
= Ca r,
a
and
= Cd r.
d
V ar(
a) = Ca V ar(r)C0a ,
(7)
= Cd V ar(r)C0 .
V ar(d)
d
(8)
and
) = trA1 V ar(
E(
a0 A1 a
a).
= trD1 V ar(d).
0 D1 d)
E(d
(9)
(10)
0 e
we compute tr(V ar(
For the expectation of e
e)). Note that
X0
= I (X I I) C I
e
y
I
= (I XC11 X0 XC12 C012 X0 XC13 C013 X0
C22 C23 C023 C33 )y
Ty
where
(11)
C
C=
012 C22 C23 .
C013 C023 C33
(12)
V ar(
e) = T V ar(y)T0 .
(13)
(14)
Then
REML by the EM type algorithm is quite simple to state. At each round of iteration we need the same quadratics as in (29.5). Now we pretend that Var(â), Var(d̂), Var(ê) are represented by the mixed model results with the true variance ratios employed. These are

Var(â) = A σ²a − C22,
Var(d̂) = D σ²d − C33,
Var(ê) = I σ²e − WCW′,

where C22, C33, C are defined in (29.12) and W = (X I I). Then

σ̂²a = (â′A⁻¹â + tr A⁻¹C22)/n,     (15)
σ̂²d = (d̂′D⁻¹d̂ + tr D⁻¹C33)/n,     (16)

and

σ̂²e = (ê′ê + tr WCW′)/n.     (17)
This algorithm guarantees that at each round of iteration all estimates are non-negative
provided the starting values of e2 /a2 , e2 /d2 are positive.
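One round of this EM-type algorithm can be sketched as below for the no-missing-data case, assuming the g-inverse of (29.2) is put on the variance scale by multiplying by σ²e; the names and the use of a pseudo-inverse are illustrative choices, not part of the original presentation.

```python
import numpy as np

def em_round(y, X, A, D, s2a, s2d, s2e):
    """One EM-type round for the additive + dominance model of this section.
    Returns updated (s2a, s2d, s2e); a sketch assuming every animal has a record."""
    n = len(y)
    p = X.shape[1]
    W = np.hstack([X, np.eye(n), np.eye(n)])                  # (X I I)
    Ainv, Dinv = np.linalg.inv(A), np.linalg.inv(D)
    M = W.T @ W
    M[p:p+n, p:p+n] += Ainv * s2e / s2a
    M[p+n:, p+n:] += Dinv * s2e / s2d
    C = np.linalg.pinv(M)                                     # g-inverse of the MME
    sol = C @ (W.T @ y)
    a_hat, d_hat = sol[p:p+n], sol[-n:]
    e_hat = y - W @ sol
    C22 = C[p:p+n, p:p+n] * s2e                               # prediction error variance scale
    C33 = C[-n:, -n:] * s2e
    s2a_new = (a_hat @ Ainv @ a_hat + np.trace(Ainv @ C22)) / n
    s2d_new = (d_hat @ Dinv @ d_hat + np.trace(Dinv @ C33)) / n
    s2e_new = (e_hat @ e_hat + np.trace(W @ (C * s2e) @ W.T)) / n
    return s2a_new, s2d_new, s2e_new
```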
4
In this section we use the same model as in Section 29.2, except that now some animals have no record but we wish to evaluate them in the mixed model solution. Let us order the animals with the set of animals having no record followed by the set with records. Then

y = (X 0 I 0 I)(β′ a′m a′p d′m d′p)′ + e.     (18)

The subscript m denotes animals with no record, and the subscript p denotes animals with a record. Let there be np animals with a record and nm animals with no record. Then y is np × 1, X is np × p, the 0 submatrices are both np × nm, and the I submatrices are both np × np. The OLS equations are
[ X′X  0  X′  0  X′ ] [β° ]   [X′y]
[ 0    0  0   0  0  ] [âm ]   [ 0 ]
[ X    0  I   0  I  ] [âp ] = [ y ]     (19)
[ 0    0  0   0  0  ] [d̂m ]   [ 0 ]
[ X    0  I   0  I  ] [d̂p ]   [ y ]
The mixed model equations are formed by adding A⁻¹σ²e/σ²a and D⁻¹σ²e/σ²d to the appropriate submatrices of matrix (29.19).
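A sketch of assembling these mixed model equations when some animals have no record: animals without records simply receive null incidence columns before A⁻¹σ²e/σ²a and D⁻¹σ²e/σ²d are added. In this sketch a and d are ordered by animal number rather than no-record animals first; the equations are equivalent.

```python
import numpy as np

def mme_with_missing(y, X, has_record, A, D, s2a, s2d, s2e):
    """Mixed model equations of this section: animals without records get null
    incidence columns, and A^{-1} s2e/s2a, D^{-1} s2e/s2d are added to the a and d
    blocks of the OLS equations.  has_record is a boolean vector over all animals."""
    n = len(has_record)
    Z = np.zeros((len(y), n))
    Z[np.arange(len(y)), np.flatnonzero(has_record)] = 1.0   # one record per recorded animal
    W = np.hstack([X, Z, Z])                                 # order: beta, a (all), d (all)
    p = X.shape[1]
    M = W.T @ W
    r = W.T @ y
    M[p:p+n, p:p+n] += np.linalg.inv(A) * s2e / s2a
    M[p+n:, p+n:] += np.linalg.inv(D) * s2e / s2d
    return np.linalg.pinv(M) @ r                             # (beta, a_hat, d_hat)
```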
We illustrate these equations with a simple example. We have 10 animals with animals
1,3,5,7 not having records. 1,2,3,4 are unrelated, non-inbred animals. The parents of 5
and 6 are 1,2. The parents of 7 and 8 are 3,4. The parents of 9 are 6,7. The parents of
10 are 5,8. This gives
1 0 0 0 .5 .5 0 0
1 0 0 .5 .5 0 0
1 0 0 0 .5 .5
1 0 0 .5 .5
1 .5 0 0
A=
1 0 0
1 .5
.25
.25
.25
.25
.25
.5
.5
.25
1
.25
.25
.25
.25
.5
.25
.25
.5
.25
1
6.
0
1.
4.5 2.25
5.5
0
1.
0
1.
0
1.
1.
1.
0
0 2.25 2.25
0
0
0
0
0
0 2.25 2.25
0
0
0
0
4.5 2.25
0
0 2.25 2.25
0
0
5.5
0
0 2.25 2.25
0
0
5.625
0
0 1.125
0 2.25
6.625 1.125
0 2.25
0
5.625
0 2.25
0
6.625
0 2.25
5.5
0
5.5
(20)
0
0
0
0
0
0
0
0
0
0
0
1
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
1
0
0
1
0
0
0
0
0
0
0
0
1
0
1
0
0
0
0
0
0
0
0
0
1
(21)
Lower right 10 10
5.0
0
6.0
0
0
5.0
0
0
0
6.0
0
0
0
0
0
0
0
0
5.333 1.333
6.333
0
0
0
0
0
0
0
0
0
0
0
0
5.333 1.333
6.333
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6.02 .314
6.02
(22)
σ̂²e = (y′y − y′Xβ° − y′â − y′d̂)/[n − rank(X)],
σ̂²a = (â′A⁻¹â + tr σ̂²e Caa)/n,

and

σ̂²d = (d̂′D⁻¹d̂ + tr σ̂²e Cdd)/n,

for

â′ = (â′m â′p),  d̂′ = (d̂′m d̂′p),

and n = number of animals, where Caa and Cdd are the blocks of a g-inverse of (29.19) corresponding to a and d.
When there are several genetic components in the model, a much more efficient computing
strategy can be employed than that of Section 29.3. Let m be total genetic value of the
members of a population, and this is
m = Σi gi,

where gi is the merit for a particular type of genetic component, additive for example. Then in a non-inbred population and ignoring linkage

Var(m) = Σi Var(gi)

since Cov(gi, gj′) = 0 for all i ≠ j. Then a model is

y = Xβ + Zm m + e.     (23)

We could, if we choose, add a term for other random components. Now the mixed model equations for BLUE and BLUP are

[ X′R⁻¹X     X′R⁻¹Zm                 ] [β°]   [X′R⁻¹y ]
[ Zm′R⁻¹X    Zm′R⁻¹Zm + [Var(m)]⁻¹   ] [m̂ ] = [Zm′R⁻¹y]     (24), (25)
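A minimal sketch of (29.24): Var(m) is accumulated as the sum of the component covariance matrices and its inverse is added to the Zm′R⁻¹Zm block. The function and argument names are illustrative.

```python
import numpy as np

def blup_total_genetic(y, X, Zm, var_components, s2e=1.0):
    """Mixed model equations (29.24) for total genetic value m, where
    Var(m) is the sum of the component covariance matrices, e.g. [A*s2a, D*s2d]."""
    Vm = sum(var_components)
    Rinv = np.eye(len(y)) / s2e
    M = np.block([[X.T @ Rinv @ X,  X.T @ Rinv @ Zm],
                  [Zm.T @ Rinv @ X, Zm.T @ Rinv @ Zm + np.linalg.inv(Vm)]])
    r = np.concatenate([X.T @ Rinv @ y, Zm.T @ Rinv @ y])
    sol = np.linalg.pinv(M) @ r
    return sol[:X.shape[1]], sol[X.shape[1]:]   # beta_hat, m_hat
```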
This method is illustrated by the example of Section 29.2. Except for scaling,

Var(e) = I,  Var(a) = 2.25⁻¹A,  Var(d) = 5⁻¹D.

Then

Var(m) = 2.25⁻¹A + 5⁻¹D =
.6444
0
.6444
0
0
.6444
0 .2222 .2222
0
0
0 .2222 .2222
0
0
0
0
0 .2222 .2222
.6444
0
0 .2222 .2222
.6444 .2722
0
0
.6444
0
0
.6444 .2722
.6444
.1111
.1111
.1111
.1111
.1111
.2222
.2222
.1111
.6444
.1111
.1111
.1111
.1111
.2222
.1111
.1111
.2222
.1236
.6444
(26)
Adding the inverse of this to the lower 10 × 10 block of the OLS equations of (29.27) we obtain the mixed model equations. The OLS equations including animals with missing records are
6 0 1 0
0 0 0
1 0
1
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
0
1
38
0
6
0
9
0
6
0
7
4
6
(27)
σ̂²e = (y′y − y′Xβ° − y′Zm m̂)/[n − rank(X)].

Then â and d̂ are computed from m̂ as described in this section. With the scaling done,

G = A σ²a/σ²e + D σ²d/σ²e,
Caa = (σ²a/σ²e)A − (σ²a/σ²e)A G⁻¹(G − Cmm)G⁻¹ A(σ²a/σ²e),
Cdd = (σ²d/σ²e)D − (σ²d/σ²e)D G⁻¹(G − Cmm)G⁻¹ D(σ²d/σ²e),

where a g-inverse of the reduced coefficient matrix is

[ Cxx   Cxm ]
[ Cxm′  Cmm ].
In our example Caa for both the extended and the reduced equations is
.4179 .0001 .0042
.0224
.3651 .0224
.0571
.4179 .0001
.3651
.2057
.1847
.0112
.0428
.4100
.1856
.1802
.0196
.0590
.1862
.3653
.0112
.0428
.2057
.1847
.0287
.0365
.4100
.0196
.0590
.1856
.1802
.0365
.0618
.1862
.3653
.0942
.1176
.1062
.1264
.1108
.1953
.2084
.1305
.3859
.1062
.1264
.0942
.1176
.2084
.1305
.1108
.1953
.1304
.3859
Similarly Cdd is
.2
0
.1786
0
0
0
0
0 .0034 .0016 .0064
.2
0
0
0
.1786 .0008 .0030
.1986 .0444
.1778
0
0
0
0
0
0
.2
0
.0030
0
.0064
.0007
.0027
0
.1778
0
.0045
0
.0047
.0016
.0062
0
.0047
.1778
0
.0047
0
.0045
.0011
.0045
0
.0062
.0136
.1778
Multiple or No Records
Next consider a model with repeated records and the traditional repeatability model. That is, all records have the same variance and all pairs of records on the same animal have the same covariance. Ordering the animals with no records first, the model is

y = [X 0 Z 0 Z Z](β′ : a′m : a′p : d′m : d′p : t′)′ + e.     (28)
The OLS equations are

[ X′X  0  X′Z  0  X′Z  X′Z ] [β° ]   [X′y]
[ 0    0  0    0  0    0   ] [âm ]   [ 0 ]
[ Z′X  0  Z′Z  0  Z′Z  Z′Z ] [âp ]   [Z′y]
[ 0    0  0    0  0    0   ] [d̂m ] = [ 0 ]     (29)
[ Z′X  0  Z′Z  0  Z′Z  Z′Z ] [d̂p ]   [Z′y]
[ Z′X  0  Z′Z  0  Z′Z  Z′Z ] [t̂  ]   [Z′y]

The mixed model equations are formed by adding A⁻¹σ²e/σ²a, D⁻¹σ²e/σ²d, and Iσ²e/σ²t to the appropriate blocks in (29.29).
We illustrate with the same 10 animals as in the preceding section, but now there are multiple records as follows.

           Records
Animal   1   2   3
  1      X   X   X
  2      6   5   4
  3      X   X   X
  4      9   8   X
  5      X   X   X
  6      6   5   6
  7      X   X   X
  8      7   3   X
  9      4   5   X
 10      6   X   X

X denotes no record. We assume that the first records have a common mean μ1, the second a common mean μ2, and the third a common mean μ3. It is assumed that σ²e/σ²a = 1.8, σ²e/σ²d = 4, and σ²e/σ²t = 4. Then the mixed model coefficient matrix is in (29.30) . . . (29.32). The right hand side vector is (38, 26, 10, 0, 15, 0, 17, 0, 17, 0, 10, 9, 6, 0, 15, 0, 17, 0, 17, 0, 10, 9, 6, 15, 17, 17, 10, 9, 6). The solution is
0
11
Upper left 13 13
6.0
0
5.0
0
0
2.0
0
0
0
3.6
1.0
1.0
1.0
1.8
6.6
0 1.0
0
1.0
0
1.0
1.0
1.0
0 1.0
0
1.0
0
1.0
1.0
0
0
0
0
1.0
0
0
0
0
0
0 1.8 1.8
0
0
0
0
0
0 1.8 1.8
0
0
0
0
3.6 1.8
0
0 1.8 1.8
0
0
5.6
0
0 1.8 1.8
0
0
4.5
0
0
.9
0 1.8
7.5
.9
0 1.8
0
4.5
0 1.8
0
6.5
0 1.8
5.6
0
4.6
(30)
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
0
3
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
2
0
0
1
1
0
0
0
0
0
0
0
0
0
2
0
12
1
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
0
3
0
0
0
0
0
0
0
0
1
1
0
0
0
0
2
0
0
0
0
0
0
1
1
1
0
0
0
0
0
3
0
0
0
0
1
1
0
0
0
0
0
0
0
0
2
0
0
1
1
0
0
0
0
0
0
0
0
0
2
0
1
0
0
0
0
0
0
0
0
0
0
0
1
(31)
Lower right 16 16
4 0 0 0
7 0 0
4 0
0
0
0
0
0
0
0
0
4.267 1.067
7.267
0
0
0
0
0
0
0
0
0
0
0
0
4.267 1.067
6.267
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6.016 .251
5.016
0
3
0
0
0
0
0
0
0
0
7
0
0
0
2
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
7
0
0
0
0
0
0
0
2
0
0
0
0
0
6
0
0
0
0
0
0
0
0
2
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
5
(32)
m = Σi gi + t,

where the gi have the same meaning as before, and t is a permanent environmental effect with Var(t) = Iσ²t. Then the mixed model equations are like those of (29.24), and from m̂ one can compute the ĝi and t̂.
Using the same example as in Section 29.5 the OLS equations are
6 0 0 0
5 0 0
2 0
1
1
1
0
3
0
0
0
0
0
0
1
1
0
0
0
0
2
0
0
0
0
0
0
0
0
1
1
1
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
13
1
1
0
0
0
0
0
0
0
0
2
1
1
0
0
0
0
0
0
0
0
0
2
1
0
0
0
0
0
0
0
0
0
0
0
1
38
26
10
0
15
0
17
0
17
0
10
9
6
1
576
608
0
608
0
0
608
0 160 160
0
0 80 80
0 160 160
0
0 .80 80
0
0
0 160 160 80 80
608
0
0 160 160 80 80
608 196
0
0 80 160
.
608
0
0 160 80
608 80 160
608 89
608
Adding the inverse of this to the lower 10 × 10 block of the OLS equations we obtain the mixed model equations. The solution is

(μ̂1, μ̂2, μ̂3) = (6.398, 5.226, 5.287),

the same as before, and

m̂ = (.067, .500, .364, 1.707, .182, .073, .001, .431, .835, .253)′.

Then

â = Var(a)[Var(m)]⁻¹ m̂ = same as before,
d̂ = Var(d)[Var(m)]⁻¹ m̂ = same as before,
t̂ = Var(t)[Var(m)]⁻¹ m̂ = same as before,

recognizing that t̂i for an animal with no record is 0.
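The back-solution ĝi = Var(gi)[Var(m)]⁻¹ m̂ (and likewise for t̂) used above can be sketched as follows; the dictionary of component matrices is an illustrative convention.

```python
import numpy as np

def split_total_merit(m_hat, Vm, components):
    """Recover component predictions from m_hat via g_i_hat = Var(g_i)[Var(m)]^{-1} m_hat;
    `components` maps names to the corresponding covariance matrices."""
    Vm_inv_m = np.linalg.solve(Vm, m_hat)
    return {name: V @ Vm_inv_m for name, V in components.items()}

# e.g. with the scaled example: Vm = A/2.25 + D/5 + T_block, and
# hats = split_total_merit(m_hat, Vm, {"a": A/2.25, "d": D/5, "t": T_block}),
# where T_block has zero rows and columns for animals without records,
# so that t_hat is 0 for them, as noted above.
```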
To compute EM type REML iterate on
e2 = [y0 y (soln. vector)0 rhs]/[n rank(X)].
Compute Caa , Cdd , Ctt as in Section 29.4. Now, however, Ctt will have dimension, 10,
rather than 6 in order that the matrix of the quadratic in t at each round of iteration will
be I. If we did not include missing ti , a new matrix would need to be computed at each
round of iteration.
14
Chapter 30
Line Cross and Breed Cross Analyses
C. R. Henderson
1984 - Guelph
This chapter is concerned with a genetic model for line crosses, BLUP of crosses, and
estimation of variances. It is assumed that a set of unselected inbred lines is derived from
some base population. Therefore the lines are assumed to be uncorrelated.
Genetic Model
We make the assumption that the total genetic variance of a population can be partitioned into additive + dominance + (additive additive) + (additive dominance),
etc. Further, in a non-inbred population these different sets of effects are mutually uncorrelated, e.g., Cov (additive, dominance) = 0. The covariance among sets of effects can
be computed from the A matrix. Methods for computing A are well known. D can be
computed as described in Chapter 29.
V ar(additive effects) = Aa2 .
V ar(dominance effects) = Dd2 .
2
.
V ar(additive dominance) = A#Dad
2
, etc.
V ar(additive additive dominance) = A#A#Daad
If lines are unrelated, the progeny resulting from line crosses are non-inbred and consequently the covariance matrices for the different genetic components can be computed for
the progeny. Then one can calculate BLUP for these individual animals by the method
described in Chapter 29. With animals as contrasted to plants it would seem wise to
include a maternal influence of line of dam in the model as described below. Now in order
to reduce computational labor we shall make some simplifying assumptions as follows.
1
1+f
2f
..
2f
1+f
From this result we can calculate the covariance between any random pair of individuals from the same cross, or between a random individual of one cross and a random individual of another cross. We illustrate first with single crosses. Consider the line cross 1 × 2, line 1 being used as the sire line. Two random progeny, pa and pb, can be visualized as

pa with sire 1a and dam 2a,   pb with sire 1b and dam 2b,

where 1a, 1b are random members of line 1 and 2a, 2b are random members of line 2. Therefore

a(1a,1b) = a(2a,2b) = 2f,
a(pa,pb) = .25(2f + 2f) = f,
d(pa,pb) = .25[2f(2f) + 0(0)] = f².

Then the genetic covariance between 2 random members of any single cross, which equals the genetic variance of single cross means, is

f σ²a + f² σ²d + f² σ²aa + f³ σ²ad + etc.

Note that if f = 1, this simplifies to the total genetic variance of individuals in the population from which the lines were derived.
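These covariance expressions are simple functions of f and can be sketched directly; only the terms written out in the text are included.

```python
def single_cross_variance(f, s2a, s2d, s2aa=0.0, s2ad=0.0):
    """Genetic variance of single-cross means (= covariance between two random
    members of the same cross) for parent lines with inbreeding f."""
    return f * s2a + f**2 * s2d + f**2 * s2aa + f**3 * s2ad

def common_line_covariance(f, s2a, s2aa=0.0):
    """Covariance between two crosses sharing one parental line."""
    return 0.5 * f * s2a + 0.25 * f**2 * s2aa

# e.g. with f = .6, s2a = .4, s2d = .3 (the example below):
# same cross: .6*.4 + .36*.3 = .348;  one common line: .5*.6*.4 = .12
```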
Next consider the covariance between crosses with one common parental line, say 1 × 2 with 1 × 3. Visualize random progeny pa (sire 1a, dam 2a) and pb (sire 1b, dam 3b). As before, a(1a,1b) = 2f, but all other relationships among the parental pairs are zero. Then

a(pa,pb) = .25(2f) = .5f,
d(pa,pb) = 0,

Covariance = .5f σ²a + .25f² σ²aa + . . . , etc.
Next we consider 3 way crosses. Represent 2 random members of a 3 way cross (1
2) 3 by
1a
1b
&
&
xa
xb
&
2a
%
pa
&
2b
pb
3a
3b
This section is concerned with a model in which the cross, line i line j, is assumed the
same as the cross, line j line i. The model is
0
2
f a2 + f 2 d2 + f 3 ad
+ etc.
Cov(cij , cji ).
2
Cov(cij , cji0 ) = .5 f a2 + .25 f 2 aa
+ . . . etc.
0.
We illustrate BLUP with single crosses among 4 lines with f = .6, σ²a = .4, σ²d = .3, σ²e = 1. All other genetic covariances are ignored. The only fixed effect is μ. The numbers of observations per cross, nij, and the cross means, ȳij., are

        nij                  ȳij.
     1  2  3  4           1  2  3  4
1    X  4  4  2       1   X  5  6  4
2    5  X  6  3       2   6  X  3  7
3    3  2  X  9       3   4  7  X  4
4    2  3  5  X       4   8  7  3  X

X denotes no observation. The OLS equations are in (30.1). Note that aij is combined with aji to form the variable aij, and similarly for d.
48 9 7 4
9 0 0
7 0
8
0
0
0
8
6 14 9 7 4 8 6 14
0 0 9 0 0 0 0 0 a12
0 0 0 7 0 0 0 0
a13
0 0 0 0 4 0 0 0
a14
0 0 0 0 0 8 0 0
a
23
a
6 0 0 0 0 0 6 0
24
14 0 0 0 0 0 14
a34
9 0 0 0 0 0
d12
7 0 0 0 0
d13
4 0 0 0
d14
8 0 0
d23
6 0 d24
14
d34
235
50
36
24
32
42
51
50
36
24
32
42
51
(1)
.24
0
.12
.12
, V ar(d) = .108 I.
V ar(a) =
.24 .12 .12
.24 .12
.24
Var(a) is singular. Consequently we pre-multiply equation (30.1) by

[ 1  0       0 ]
[ 0  Var(a)  0 ]
[ 0  0       I ]

and add

[ 0  0  0           ]
[ 0  I  0           ]
[ 0  0  [Var(d)]⁻¹  ]

to the resulting coefficient matrix. The solution to these equations is

μ̂ = 5.1100,
â′ = (.5528, .4229, .4702, .4702, .4229, .5528),
d̂′ = (.0528, .1962, .1266, .2965, .5769, .5504).
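A sketch of the device just used for a singular Var(a): premultiply the OLS system by diag(1, Var(a), I) and add diag(0, I, [Var(d)]⁻¹) before solving. The argument layout (fixed effects first, then a, then d) is an assumption of the sketch.

```python
import numpy as np

def solve_with_singular_G(C_ols, r_ols, Va, Vd_inv, n_fixed=1):
    """Premultiply the OLS equations by diag(I_fixed, Va, I) and add
    diag(0, I, Vd^{-1}) to the resulting (non-symmetric) coefficient matrix."""
    na, nd = Va.shape[0], Vd_inv.shape[0]
    T = np.zeros_like(C_ols, dtype=float)
    T[:n_fixed, :n_fixed] = np.eye(n_fixed)
    T[n_fixed:n_fixed + na, n_fixed:n_fixed + na] = Va
    T[n_fixed + na:, n_fixed + na:] = np.eye(nd)
    M = T @ C_ols
    M[n_fixed:n_fixed + na, n_fixed:n_fixed + na] += np.eye(na)
    M[n_fixed + na:, n_fixed + na:] += Vd_inv
    return np.linalg.solve(M, T @ r_ols)
```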
Note that Σ d̂ = 0. The predicted future progeny average of the ijth and jith cross is μ̂ + âij + d̂ij. To predict a cross involving a line k not in the sample, say k × i, we can use selection index methods with â + d̂ as the data and with variances and covariances applying to a + d rather than a. See Section 5.9. For example the prediction of the 1 × 5 cross is

(.12  .12  .12  0  0  0)
[ .348  .12   .12   .12   .12   0    ]⁻¹
[ .12   .348  .12   .12   0     .12  ]
[ .12   .12   .348  0     .12   .12  ]   (â + d̂).     (2)
[ .12   .12   0     .348  .12   .12  ]
[ .12   0     .12   .12   .348  .12  ]
[ 0     .12   .12   .12   .12   .348 ]
In most animal breeding models one would assume that because of maternal effects the
ij th cross would be different from the jith . Now the genetic model for maternal effects
involves the genetic merit with respect to maternal of the female line in a single cross.
This complicates statements of the variances and covariances contributed by different
genetic components since the lines are inbred. The statement of a2 is possible but not
the others. The contribution of a2 is
Covariance between 2 progeny of the same cross
= 2f a2 ,
= .5f a2 ,
where the second subscript denotes the female line. Consequently if we ignore other
2
. We illustrate
components, we need only to add mj to the model with V ar(m) = Im
with the same data as in Section 30.3 with V ar(m) = .5 I. The OLS equations now are
in (30.3). Now we pre-multiply these equations by
1
0
0 V ar(a)
0
0
0
0
0
0
I
0
0
0
0
I
0
0
0
0
0
0
0
I
0
0
1
0 [V ar(d)]
0
0
0
[V ar(m)]1
= 5.1999,
0 = (.2988, .2413, .3217, .3217, .2413, .2988),
a
0 = (.1737, .2307, .1136, .1759, .4479, .4426),
d
and
0 = (.0560, .6920, .8954, .1475).
m
48 9 7 4
9 0 0
7 0
8
0
0
0
8
6 14 9 7 4 8 6 14 10 10 18 10
0 0 9 0 0 0 0 0 4 5 0 0
0 0 0 7 0 0 0 0 4 0 3 0
0 0 0 0 4 0 0 0 2 0 0 2
0 0 0 0 0 8 0 0 0 2 6 0
6 0 0 0 0 0 6 0 0 3 0 3
14 0 0 0 0 0 14 0 0 9 5
9 0 0 0 0 0 4 5 0 0
a
7 0 0 0 0 4 0 3 0
d
4 0 0 0 2 0 0 2
m
8 0 0 0 2 6 0
6 0 0 3 0 3
14 0 0 9 5
10 0 0 0
10 0 0
18 0
10
= (235, 50, 36, 24, 32, 42, 51, 50, 36, 24, 32, 42, 51, 54, 62, 66, 53)0 .
(3)
If single crosses are used as the maternal parent in crossing, we can utilize various components of genetic variation with respect to maternal effects, for then the maternal parents
are non-inbred.
Breed Crosses
If one set of breeds is used as males and a second different set is used as females in a
breed cross, the problem is the same as for any two way fixed cross-classified design with
interaction and possible missing subclasses. If there is no missing subclass, the weighted
squares of means analysis would seem appropriate, but with small numbers of progeny
per cross, y ij may not be the optimum criterion for choosing the best cross. Rather, we
might choose to treat the interaction vector as a pseudo-random variable and proceed
to a biased estimation that might well have smaller mean squared error than the y ij . If
subclasses are missing, this biased procedure enables finding a biased estimator of such
crosses.
If the same breeds are used as sires and as dams, and progeny of some or all of the pure breeds are included in the design, the analysis can be more complicated. Again one possibility is to evaluate a cross or pure line simply by the subclass mean. However, most breeders have attempted a more complicated analysis involving, for example, the following model for μij, the true mean of the cross between the ith sire breed and the jth dam breed:

μij = μ + si + dj + γij + p  if i = j,
    = μ + si + dj + γij       if i ≠ j.

From the standpoint of ranking crosses by BLUE, this model is of no particular value, for even with filled subclasses the rank of the coefficient matrix is only b², where b is the number of breeds. A solution to the OLS equations is

μ° = s° = d° = p° = 0,  γ°ij = ȳij.

Thus BLUE of a breed cross is simply ȳij, provided nij > 0. The extended model provides no estimate of a missing cross since that is not estimable. In contrast, if one is prepared to use biased estimation, a variety of estimates of missing crosses can be derived, and these same biased estimators may, in fact, be better estimators of filled subclasses than ȳij. Let us restrict ourselves to estimators of μij that have expectation μ + si + dj + p + a linear function of γ if i = j, or μ + si + dj + a linear function of γ if i ≠ j. Assume that the γii are different from the γij (i ≠ j). Accordingly, let us assume for convenience that

Σj γij = 0 for i = 1, . . . , b,
Σi γij = 0 for j = 1, . . . , b, and
Σi γii = 0.
Next permute all labelling of breeds and compute the average squares and products of
the ij . These have the following form:
Av.(ii )2 = d.
Av.(ij )2 = c.
Av.(ii jj ) = d/(b 1).
8
Av.(ij ik ) = Av.(ij kj ) =
d c(b 1)
.
(b 1)(b 2)
d/(b 1).
d/(b 1).
r.
2d/(b 1)(b 2).
d r(b 1)
Av.(ij ki ) =
.
(b 1)(b 2)
(c + r)(b 1) 4d
Av.(ij kl ) =
.
(b 1)(b 2)(b 3)
Av.(ij jk ) = Av.(ij ki ).
Av.(ii ij )
Av.(ii ji )
Av.(ij ji )
Av.(ii jk )
=
=
=
=
These squares and products comprise a singular P matrix which could then be used in
pseudo-mixed model equations. This would, of course, require estimating d, c, r from the
data. Solving the resulting mixed model type equations,
ii = o + soi + doi + ii + po ,
ij = o + soi + doi + ij ,
when i 6= j.
A simpler method is to pretend that the model for ij is
ij = + si + dj + ij + r(i,j) ,
when i 6= j, and
ii = + si + dj + ii + p.
r has b(b 1)/2 elements and (ij) denotes i < j. Thus the element of r for ij is the same
as for ji . Then partition into the ii elements and the ij elements and pretend that
and r are random variables with
11
V ar 22
= I1 , V ar
..
.
12
The covariances between these three vectors are all null. Then set up and solve the mixed
model equations. With proper choices of values of 12 , 22 , 32 relative to b, d, c, r the
estimates of the breed crosses are identical to the previous method using singular P. The
latter method is easier to compute and it is also much easier to estimate 12 , 22 , 32 than
the parameters of P. For example, we could use Method 3 by computing appropriate
reductions and equating to their expectations.
We illustrate these two methods with a 4 breed cross. The nij and yij. were as follows.
9
5
4
3
4
nij
2 3
2 6
5 2
2 3
1
7
8
4
6
8
9
2
yij.
3 2
3 5
4 7
6 8
7
6
3
6
Assume that P is the following matrix, (30.4) . . . (30.6). V ar(e) = I. Then we premultiply
the OLS equations by
!
I 0
0 P
and add I to the last 16 diagonal coefficients.
Upper 8 8
1.8
.6
.6
.6 .6 .6
.6
.6
4.48 1.94 1.94
.88 .6 .14 .14
4.48 .14
.6
1.48 1.94
1.8
.6
.6
4.48 1.94
4.48
(4)
.6
.6 .6
.6
.6
.6
.6 .6
.14 1.94
.6
1.48 .14 1.94
1.48
.6
.14
1.48
.6 1.94
.88 .14 .14 .6
1.94 .14
.6
1.48 1.94 .14
1.48
.6
.6
.6 .6
.6
.6
.6
.6 .6
.14
.88 .6 .14
1.48 .14 1.94
.6
1.48 .14
.6 1.94 .14
.88 .14 .6
(5)
Lower right 8 8
4.48 .6 1.94
1.48 1.94 .14
.6
1.8
.6
.6
.6
.6
.6
4.48 1.94 .6
4.48 .6
1.8
10
(6)
=
=
=
=
0,
(2.923, 1.713, 2.311, 2.329)0 ,
(.652, .636, .423, 0)0 ,
.007.
in tabular form is
The resulting
ij are
ij can be
Note that these
ij 6= y ij but are not markedly different from them. The same
obtained by using
V ar( ii ) = 2.88 I,
V ar( ij ) = 7.2 I,
V ar(r) = 2.64 I.
The solution to these mixed model equations is different from before, but the resulting
ij are identical. Ordinarily one would not accept a negative variance. The reason for
this in our example was a bad choice of the parameters of P. The OLS coefficient matrix
for this solution is in (30.7) . . . (30.9). The right hand sides are (18, 22, 23, 22, 25, 16,
22, 22, 6, 3, 2, 7, 8, 3, 5, 6, 9, 4, 7, 3, 2, 6, 8, 6, 11, 11, 9, 9, 12, 11). o and do4 are
deleted giving a solution of 0 for them. The OLS equations for the preceding method are
the same as these except the last 6 equations and unknowns are deleted. The solution is
o
so
do
po
ro
=
=
=
=
=
0,
(1.460, 1.379, 2.838, 1.058)0 ,
(.844, .301, 1.375, 0)0 ,
.007,
(.253, .239, 1.125, .888, .220, .471)0 .
in tabular form =
Upper left 15 15
11
0
19
0
0
18
0
0
0
13
5
4
3
4
16
2
2
5
2
0
11
3
6
2
3
0
0
14
5
2
2
4
5
2
2
13
5
0
0
0
5
0
0
5
5
2
0
0
0
0
2
0
0
0
2
3
0
0
0
0
0
3
0
0
0
3
1
0
0
0
0
0
0
0
0
0
0
1
0
4
0
0
4
0
0
0
0
0
0
0
4
0
2
0
0
0
2
0
2
0
0
0
0
0
2
0
6
0
0
0
0
6
0
0
0
0
0
0
0
6
(7)
0
7
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
3
0
3
0
0
0
0
0
0
0
0
0
0
0
0
5
0
0
5
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
2
2
0
0
0
0
0
0
0
0
0
8
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
4
4
0
0
0
0
0
0
0
0
0
0
0
0
0
2
0
2
0
0
0
0
0
0
0
0
0
0
0
0
3
0
0
3
0
0
0
0
0
0
0
0
12
0
0
0
4
0
0
0
4
0
0
0
0
0
0
0
2
4
0
0
4
2
0
0
0
2
0
0
4
0
0
3
0
3
0
3
0
3
0
0
0
3
0
0
0
0
1
0
0
4
4
0
0
0
0
0
0
1
0
0
0
0
6
5
0
0
5
6
0
0
0
0
0
0
0
6
0
7
0
2
0
2
0
0
0
0
0
0
0
0
0
0
0
8
3
0
0
3
0
0
0
0
0
0
0
0
(8)
Lower right 15 15
7 0 0 0
3 0 0
5 0
0
0
0
0
8
0
0
0
0
0
4
0
0
0
0
0
0
2
0 0
0 0
0 0
0 0
0 0
0 0
0 0
3 0
dg (4
0
0
0
0
0
0
0
0
6
0
3
0
0
0
0
0
0
6
0 0
0 0
0 5
0 0
0 0
4 0
0 0
0 0
5 11
7
0
0
0
0
0
0
0
0
8
0
0
2
0
0
3
9 11)
(9)
The method just preceding is convenient for missing subclasses. In that case ij
associated with nij = 0 are included in the mixed model equations.
13
Chapter 31
Maternal Effects
C. R. Henderson
1984 - Guelph
Many traits are influenced by the environment contributed by the dam. This is
particularly true for traits measured early in life and for species in which the dam nurses
the young for several weeks or months. Examples are 3 month weights of pigs, 180 day
weights of beef calves, and weaning weights of lambs. In fact, genetic merit for maternal
ability can be an important trait for which to select. This chapter is concerned with some
models for maternal effects and with BLUP of them.
Maternal effects can be estimated only through the progeny performance of a female or the
progeny performance of a related female when direct and maternal effects are uncorrelated.
If they are correlated, maternal effects can be evaluated whenever direct can be. Because
the maternal ability is actually a phenotypic manifestation, it can be regarded as the sum
of a genetic effect and an environmental effect. The genetic effect can be partitioned at
least conceptually into additive, dominance, additive additive, etc. components. The
environmental part can be partitioned, as is often done for lactation yield in dairy cows,
into temporary and permanent environmental effects. Some workers have suggested that
the permanent effects can be attributed in part to the maternal contribution of the dam
of the dam whose maternal effects are under consideration.
Obviously if one is to evaluate individuals for maternal abilities, estimates of the
underlying variances and covariances are needed. This is a difficult problem in part due
to much confounding between maternal and direct genetic effects. BLUP solutions are
probably quite sensitive to errors in estimates of the parameters used in the prediction
equations. We will illustrate these principles with some examples.
Sex      Sire      Dam       Record
Male     Unknown   Unknown     6
Female   Unknown   Unknown     9
Female   1         2           4
Female   1         2           7
Male     Unknown   Unknown     8
Male     Unknown   Unknown     3
Male     6         3           6
Male     5         4           8

The animals are numbered 1 through 8 in the order listed.
1 0 .5 .5
1 .5 .5
1 .5
0
0
0
0
1
0 .25 .25
0 .25 .25
0 .5 .25
0 .25
.5
0
0
.5
1 .5
0
1 .125
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
2
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
(1)
Cols. 2-9 represent a and cols 10-17 represent m. This gives the following OLS equations.
8 1 1 1
1 0 0
1 0
1
0
0
0
1
1
0
0
0
0
1
1
0
0
0
0
0
1
1
0
0
0
0
0
0
1
1
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
G=
2
0
0
1
1
0
0
0
0
0
2
1
0
0
0
0
0
0
1
0
0
0
1
1
0
0
0
0
0
0
0
1
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
.5A .2A
.2A .4A
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
a
=
51
6
9
4
7
8
3
6
8
0
11
6
8
0
0
0
0
(2)
Adding the inverse of G to the lower 16 × 16 submatrix of (31.2) gives the mixed model equations, the solution to which is

μ̂ = 6.386,
â = (.241, .541, .269, .400, .658, 1.072, .585, .709)′,
m̂ = (.074, .136, .144, .184, .263, .429, .252, .296)′.

In contrast, if Cov(a, m′) = 0, the maternal predictions of animals 5 and 6 are 0. With σ²a = .5, σ²m = .4, σam = 0 the solution is

μ̂ = 6.409,
â = (.280, .720, .214, .440, .659, 1.099, .602, .742)′,
m̂ = (.198, .344, .029, .081, 0, 0, .014, .040)′.

Note that animals 5 and 6 now cannot be evaluated for m since they are males and have no female relatives with progeny.
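The G used in this kind of direct-plus-maternal analysis is built from A and the three genetic parameters; a minimal sketch (the parameter values in the comment are those of the example above):

```python
import numpy as np

def direct_maternal_G(A, s2a, s2m, s_am):
    """Additive genetic (co)variance matrix for direct and maternal effects,
    G = [[s2a*A, s_am*A], [s_am*A, s2m*A]]; e.g. s2a=.5, s2m=.4, s_am=.2 above."""
    return np.block([[s2a * A, s_am * A],
                     [s_am * A, s2m * A]])

# With s_am = 0 the off-diagonal blocks vanish, and maternal effects are predicted
# only through females with recorded progeny, as noted above.
```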
If we assume that additive and dominance affect both direct and maternal merit, the
incidence matrix of (31.1) is augmented on the right by the last 16 columns of (31.1)
giving an 8 33 matrix. Assume the same additive direct and maternal parameters as
before and let the dominance parameters be .3 for direct variance, .2 for maternal, and .1
for their covariance. Then
G=
.5A .2A
0
0
.2A .4A
0
0
0
0 .3D .1D
0
0 .1D .2D
The solution is
a direct
a maternal
d direct
d maternal
=
=
=
=
=
6.405,
(.210, .478, .217, .350, .545, .904, .503, .588)0 ,
(.043, .083, .123, .156, .218, .362, .220, .243)0 ,
(.045, .392, .419, .049, .242, .577, .069, .169)0 ,
(.015, .078, .078, .119, .081, .192, .023, .056)0 .
d(direct)
D d(direct),
0
1
d(direct)
D d(maternal),
0 1
d(maternal) D d(maternal),
0 e
.
e
Of course the data of our example would be quite inadequate to estimate these variances
and covariances.
Chapter 32
Three Way Mixed Model
C. R. Henderson
1984 - Guelph
Some of the principles of preceding chapters are illustrated in this chapter using an
unbalanced 3 way mixed model. The method used here is one of several alternatives that
appeals to me at this time. However, I would make no claims that it is best.
The Example
Numbers of observations, nijk (rows are levels of A; columns are BC subclasses):

       BC subclass
A    11  12  13  21  22  23  31  32  33
1     5   2   3   6   0   3   2   5   0
2     1   2   4   0   5   2   3   6   0
3     0   4   8   2   3   5   7   0   0

Subclass means, ȳijk, for the filled subclasses in the same order:

1:  3  5  2  4  8  9  2
2:  5  6  7  8  5  2  6
3:  9  8  4  3  7  5
9 8 4 3 7 5
Because there are no observations on bc33 , estimates and tests of b, c, and b c that mimic
the filled subclass case cannot be accomplished using unbiased estimators. Accordingly,
we might use some prior on squares and products of bcjk and obtain biased estimators.
2
2
Let us assume the following prior values, e2 /a2 = 2, e2 /ab
= 3, e2 /ac
= 4, e2 /pseudo
2
2
bc
= 6, e2 /abc
= 5.
The OLS equations that include missing observations have 63 unknowns as follows
a
b
c
ab
13
ac 19 27
46
bc 28 36
79
abc 37 63
10 18
of
c
1
2
3
1
3
1
2
1
2
3
2
3
1
2
2
3
1
2
3
1
Cols. with 1
1,4,7,10,19,28,37
1,4,8,10,20,29,38
1,4,9,10,21,30,39
1,5,7,11,19,31,40
1,5,9,11,21,33,42
1,6,7,12,19,34,43
1,6,8,12,20,35,44
2,4,7,13,22,28,46
2,4,8,13,23,29,47
2,4,9,13,24,30,48
2,5,8,14,23,32,50
2,5,9,14,24,33,51
2,6,7,15,22,34,52
2,6,8,15,23,35,53
3,4,8,16,26,29,56
3,4,9,16,27,30,57
3,5,7,17,25,31,58
3,5,8,17,26,32,59
3,5,9,17,27,33,60
3,6,7,18,25,34,61
Let N be a 20 20 diagonal matrix with filled subclass numbers in the diagonal, that is
0
N = diag(5,2,. . . ,5,7). Then the OLS coefficient matris is W NW, and the right hand
side vector is W0 Ny, where y = (3 5 . . . 7 5)0 . The right hand side vector is (107, 137,
187, 176, 150, 105, 111, 153, 167, 31, 48, 28, 45, 50, 42, 100, 52, 35, 57, 20, 30, 11, 88, 38,
43, 45, 99, 20, 58, 98, 32, 49, 69, 59, 46, 0, 15, 10, 6, 24, 0, 24, 18, 10, 0, 5, 12, 28, 0, 40,
10, 6, 36, 0, 0, 36, 64, 8, 9, 35, 35, 0, 0).
Now we add the following diagonal matrix to the coefficient matrix: 2 for each of the 3 a equations, 0 for the b and c equations, 3 for each ab, 4 for each ac, 6 for each bc, and 5 for each abc equation, that is, (2, 2, 2, 0, 0, 0, 0, 0, 0, 3, . . . , 3, 4, . . . , 4, 6, . . . , 6, 5, . . . , 5).
From these results the biased predictions of the subclass means are in (32.1).

             B1                    B2                    B3
A       C1     C2     C3      C1     C2     C3      C1     C2     C3
1     3.265  4.135  3.350   4.324  4.622  6.888   6.148  2.909  5.439
2     4.420  6.971  6.550   3.957  7.331  6.229   3.038  5.667  5.348
3     6.222  8.234  7.982   3.928  4.217  6.754   5.006  4.965  6.549
                                                                    (1)
Note that these are different from the y ijk for filled subclasses, the latter being BLUE.
Also subclass means are predicted for those cases with no observations.
Tests Of Hypotheses
Suppose we wish to test the following hypotheses regarding b, c, and bc. Let
jk = + bj + ck + bcjk .
We test j. are equal, .k are equal, and that all jk - j. - .k + .. are equal. Of course
these functions are not estimable if any jk subclass is missing as is true in our example.
Consequently we must resort to biased estimation and accompanying approximate tests
based on estimated MSE rather than sampling variances. We assume that our priors are
the correct values and proceed for the first test.
K0 =
1 1 1 0 0 0 1 1 1 1 1 1 0 0
0 0 0 1 1 1 1 1 1 0 0 0 1 1
0 1 1 1 1 1 1 0 0 0 1 1 1
1 1 1 1 0 0 0 1 1 1 1 1 1
3
where is the vector of ijk ordered k in j in i. From (32.1) the estimate of these functions
is (6.05893, 3.18058). To find the mean squared errors of this function we first compute
0
the mean squared errors of the ijk . This is WCW P, where W is the matrix relating
ijk subclass means to the 63 elements of our model. C is a g-inverse of the mixed model
coefficient matrix. Then the mean squared error of K0 is
K′PK =
[ 17.49718  13.13739 ]
[ 13.13739  16.92104 ].

Then
K0 =
1 1 0 1 1 0 1 1 0 1 1 0 1
1 0 1 1 0 1 1 0 1 1 0 1 1
K′μ° = (14.78060, 6.03849)′,

with

MSE =
[ 17.25559  10.00658 ]
[ 10.00658  14.13424 ].

This gives the test criterion (K′μ°)′[MSE]⁻¹(K′μ°) = 13.431, distributed approximately as χ² with 2 d.f. under the null hypothesis.
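The approximate criterion used here is simply the quadratic (K′μ°)′[MSE]⁻¹(K′μ°); a small check against the numbers quoted above:

```python
import numpy as np

def approx_chi_square(estimate, mse):
    """Approximate test criterion: (K'mu)'[MSE]^{-1}(K'mu), referred to a
    chi-square with rows(K') degrees of freedom."""
    v = np.asarray(estimate, float)
    return float(v @ np.linalg.solve(np.asarray(mse, float), v))

crit = approx_chi_square(
    [14.78060, 6.03849],
    [[17.25559, 10.00658],
     [10.00658, 14.13424]])
# crit is approximately 13.43, matching the value reported in the text
```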
For B C interaction we use
K0 =
1
0
0
0
0 1 0 0
0 1
0 1 1
1 1 0 0
0
0 1 1 0
0
0 1 0 1 1
0 1 0
0
0 0 1 1
0 1 1 0
0 1
0 1 1
0
0 1 1 0
1 1
0 1 0
1
0 1 1 0
0 1 0 0
1 1 0 0
0
0 1 0
0
0 0 1
0 1 0 0
0 1
0 1
1 1 0 0
0
0 1 1
0
0 1 0 1 1
0 1
0
0 0 1 1
0 1 1
This gives
K0 o = (.83026, 5.25381, 4.51772, .09417)0 ,
with
6.37074 4.31788 4.56453 3.64685
MSE =
6.32592 4.23108
6.31457
The test criterion is 21.044 distributed approximately as 2 with 4 d.f. under the null
hypothesis.
Note that in these examples of hypothesis testing the priors used were quite arbitrary.
The tests are of little value unless one has good prior estimates. This of course is true for
any unbalanced mixed model design.
a2
0
a
+ tr 1.747
.28645 .10683
= a
/3 = .669
.28558
.2346
..
0
2
a
+ tr 1.747
b
b
ab
=
a
ac
bc
abc
.26826
.18846
...
.16607
.16505
.. . .
0
bc a
bc + tr 1.747
= a
.
.
/9 = .847.
...
.14138
0
.. . .
= bc bc + tr 1.747
.
.
.19027
0
.. . .
c a
c + tr 1.747
= a
.
.
...
/9 = .580.
...
.20000
/9 = .357.
/127 = .534.
e2
a2
2
ab
2
ac
2
bc
2
abc
1
1.747
.169
.847
.580
.357
.534
2
3
4
1.470 1.185 .915
.468 .330 .231
.999 1.090 1.102
.632 .638 .587
.370 .362 .327
.743 1.062 1.506
2
It appears that
e2 and
abc
may be highly confounded, and convergence will be slow. Note
2
2
that
e +
abc does not change much.
Chapter 33
Selection When Variances are Unequal
C. R. Henderson
1984 - Guelph
The mixed model equations for BLUP are well adapted to deal with variances that
differ from one subpopulation to another. These unequal variances can apply to either e
or to u or a subvector of u. For example, cows are to be selected from several herds, but
the variances differ from one herd to another. Some possibilities are the following.
1. a2 , additive genetic variance, is the same in all herds but the within herd e2 differ.
2. e2 is constant from one herd to another but intra-herd a2 differ.
3. Both a2 and e2 differ from herd to herd, but a2 /e2 is constant. That is, intra-herd h2
is the same in all herds, but the phenotypic variance is different.
4. Both a2 and e2 differ among herds and so does a2 /e2 .
=
=
=
=
si + hj + eijk .
As2 ,
Ie2 ,
0.
h is fixed. Suppose, however, that we assume, probably correctly, that within herd e2
varies from herd to herd, probably related to the level of production. Suppose also that s2
is influenced by the herd. That is, in the population of sires s2 is different when sires are
used in herd 1 as compared to s2 when these same sires are used in herd 2. Suppose further
that s2 /e2 is the same for every herd. This may be a somewhat unrealistic assumption,
but it may be an adequate approximation. We can treat this as a multiple trait problem,
trait 1 being progeny values in herd 1, trait 2 being progeny values in herd 2, etc. For
purposes of illustration let us assume that all additive genetic correlations between pairs
of traits are 1. In that case if the true rankings of sires for herd 1 were known, then these
would be the true rankings in herd 2.
1
R=
Iv1 0 . . . 0
0 Iv2 . . .
..
.
..
. ..
.
0 . . . . . . Ivt
G=
where vi /wii is the same for all i = 1, . . . , t. Further wij = (wii wjj ).5 . This is, of course,
an oversimplified model since it does not take into account season and year of freshening.
It would apply to a situation in which all data are from one year and season.
We illustrate this model with a small set of data.

         nij (herds)          yij. (herds)
Sire    1    2    3          1    2    3
 1      5    8    0          6   12    -
 2      3    4    7          5    8    9
 3      0    5    9          -   10   12

A =
[ 1    .5   .5  ]
[ .5   1    .25 ]
[ .5   .25  1   ].

σ²e for the 3 herds is 48, 108, 192, respectively. Var(s) for the 3 herds is

[ 4A   6A   8A  ]
[ 6A   9A   12A ]
[ 8A   12A  16A ].
Upper left 6 6
diag(.10417, .06250, 0, .07407, .03704, .04630).
(1)
0
0
0
0
0
0
0
0
0
0
0
0
0 .10417
0
0
0 .06250
0
0
0
0
0
0
0
0
.07407 0
0
0
.03704 0
0
0
.04630 0
(2)
Lower right 6 6
0
.03646
0
0
.04687
0
0
0
.16667
0
0
0 .03646
0 .04687
.
0
0
.15741
0
.08333
(3)
Because Var(s) is singular, the mixed model equations are premultiplied by

[ 4A   6A   8A   0  ]
[ 6A   9A   12A  0  ]
[ 8A   12A  16A  0  ]
[ 0    0    0    I3 ]

and 1 is added to each of the first 9 diagonal elements. Solving these equations, the solution is (-.0720, .0249, .0111, -.1080, .0373, .0166, -.1439, .0498, .0222, 1.4106, 1.8018, 1.2782)′. Note that ŝi1/ŝi2 = 2/3, ŝi1/ŝi3 = 1/2, ŝi2/ŝi3 = 3/4. These are in the proportion (2:3:4), which is (4^.5 : 9^.5 : 16^.5). Because of this relationship we can reduce the mixed model equations to a set involving only the si1 and hj by premultiplying the equations by
1.
0
0
0
0
0
0
1.
0
0
0
0
0 1.5
0
0 2.
0
0 1.5
0 0
1.
0
0 1.5 0
0
0
0
0 0
0
0
0
0 0
0
0
0
0 0
0
2.
0
0
0
0
0
0
2.
0
0
0
0
0
0
1.
0
0
0
0
0
0
1.
0
0
0
0
0
0
1.
(4)
Then the resulting coefficient matrix is post-multiplied by the transpose of matrix (33.4).
s11
s21
s31
1
h
2
h
3
h
16.766
15.104
14.122
.229
.278
.109
(5)
The solution is (-.0720, .0249, .0111, 1.4016, 1.8018, 1.2782)0 . These are the same as
before.
How would one report sire predictions in a problem like this? Probably the logical
thing to do is to report them for a herd with average e2 . Then it should be pointed out
that sires are expected to differ more than this in herds with large e2 and to differ less in
herds with small e2 . A simpler method is to set up equations at once involving only si1
or any other chosen sij (j fixed). We illustrate with si1 . The W matrix for our example
with subclass means ordered sires in herds is
1
0
0 1 0
0
1
0 1 0
1.5
0
0 0 1
0 1.5
0 0 1
0
0 1.5 0 1
0
2
0 0 0
0
0
2 0 0
0
0
0
0
0
1
1
This corresponds to si2 = 1.5 si1 , and si3 = 2 si1 . Now compute the diagonal matrix
diag(5, 3, 8, 4, 5, 7, 9) [dg(48, 48, 108, 108, 108, 192, 192)]1 D.
0
Then the GLS coefficient matrix is W DW and the right hand side is W Dy, where y is
the subclass mean vector. This gives
.2708
0
.2917
s11
0 .1042 .1111
0
s21
0 .0625 .0556 .0729
s31
.2917
0 .0694 .0937
.1667
0
0 h1
.1574
0
h2
3
.0833
h
.2917
.3090
.2639
.2292
.2778
.1094
(6)
Then add (4A)⁻¹ to the upper 3 × 3 submatrix of (33.6) to obtain the mixed model equations. Remember that 4A is the variance of the sires in herd 1. The solution to these equations is as before, (-.0720, .0249, .0111, 1.4106, 1.8018, 1.2782)′.
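A sketch of setting up the reduced equations directly in (si1, hj), with the incidence coefficients scaled by (gjj/g11)^.5 and (4A)⁻¹ added to the sire block as described; the function and argument names are illustrative.

```python
import numpy as np

def scaled_sire_equations(n, ybar, ve, A, g):
    """Equations in (s_i1, h_j) only.  n[i, j], ybar[i, j]: subclass counts and
    means; ve[j]: within-herd error variances; g[j]: sire variance multipliers
    (4, 9, 16 in the example), so s_ij = sqrt(g[j]/g[0]) * s_i1."""
    scale = np.sqrt(np.asarray(g, float) / g[0])
    ns, nh = n.shape
    rows, d, y = [], [], []
    for i in range(ns):
        for j in range(nh):
            if n[i, j] == 0:
                continue
            w = np.zeros(ns + nh)
            w[i] = scale[j]                    # coefficient on s_i1
            w[ns + j] = 1.0                    # herd effect
            rows.append(w); d.append(n[i, j] / ve[j]); y.append(ybar[i, j])
    W, D, y = np.array(rows), np.diag(d), np.array(y)
    M = W.T @ D @ W
    M[:ns, :ns] += np.linalg.inv(g[0] * A)     # add (4A)^{-1} to the sire block
    return np.linalg.pinv(M) @ (W.T @ D @ y)
```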
Next we illustrate inter-herd joint cow and sire when herd variances are unequal. We
assume a simple model
yij = hi + aj + eij .
h is fixed, a is additive genetic merit with
V ar(a) =
A is the numerator relationship for all animals. There are t herds, and we treat production
as a different trait in each herd. We assume genetic correlations of 1. Therefore gij =
(gii gjj ).5 .
Iv1
0
Iv2
.
V ar(e) =
..
Ivt
First we assume a2 /e2 is the same for all herds. Therefore gii /vi is the same for all herds.
As an example suppose that we have 2 herds with cows 2, 3 making records in herd
1 and cows 4, 5 making records in herd 2. These animals are out of unrelated dams, and
the sire of 2 and 4 is 1. The records are 3, 2, 5, 6.
1 .5 0 .5
1 0 .25
1
0
A=
0
0
0
0
1
Ordering the data by cow number and the unknowns by h1 , h2 , a in herd 1, a in herd 2
the incidence matrix is
1
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
1
Suppose that

G = [ 4A   8A  ],     R = [ 12I   0  ]
    [ 8A   16A ]          [ 0    48I ].

Then σ²e/σ²a = 3 in each herd, implying that h² = .25. Note that G is singular, so the method for singular G is used. With these parameters the mixed model solution is

ĥ = (2.508, 5.468),
â in herd 1 = (.030, .110, .127, .035, .066),
â in herd 2 = (.061, .221, .254, .069, .133).

Note that âi in herd 2 = 2 âi in herd 1, corresponding to (16/4)^.5 = 2.
A simpler method is to use an incidence matrix as follows.
1
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
1
0
0
0
0
2
0
0
0
0
2
2 is 2 times a in herd 1.
Now suppose that G is the same as before but e2 = 12,24 respectively. Then h2 is
higher in herd 2 than in herd 1. This leads again to the a
in herd 2 being twice a
in herd
1, but the a
for cows making records in herd 2 are relatively more variable, and if we were
selecting a single cow, say for planned mating, the chance that she would come from herd
2 is increased. The actual solution in this example is
= (2.513, 5.468).
h
in herd 1 = (.011, .102, .128, .074, .106).
a
in herd 2 = twice those in herd 1.
a
The only reason we can compare cows in different herds is the use of sires across herds. A problem with the methods of this chapter is that the individual intra-herd variances must be estimated from limited data. It would seem, therefore, that it might be advisable to take as the estimate for an individual herd the estimate coming from that herd regressed toward the mean of the variances of all herds, the amount of regression depending upon the number of observations. This would imply, perhaps properly, that intra-herd variances are a sample from some population of variances. I have not derived a method comparable to BLUP for this case.
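The regressed estimate suggested here can be sketched as a simple shrinkage of each within-herd estimate toward the weighted mean of all herds; the weight n/(n + k) and the constant k are purely illustrative choices, since the text does not specify the amount of regression.

```python
import numpy as np

def shrunken_herd_variances(est, n_obs, k=20.0):
    """Regress each within-herd variance estimate toward the mean of all herds,
    with more shrinkage for herds with few observations.  k is a hypothetical
    tuning constant, not a value given in the text."""
    est, n_obs = np.asarray(est, float), np.asarray(n_obs, float)
    overall = np.average(est, weights=n_obs)
    w = n_obs / (n_obs + k)
    return w * est + (1.0 - w) * overall
```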