Book Mixed Model Henderson

Download as pdf or txt
Download as pdf or txt
You are on page 1of 384

Applications of Linear Models

in Animal Breeding
Charles R. Henderson

Chapter 1
Models
C. R. Henderson
1984 - Guelph

This book is concerned exclusively with the analysis of data arising from an experiment or sampling scheme for which a linear model is assumed to be a suitable approximation. We should not, however, be so naive as to believe that a linear model is always
correct. The important consideration is whether its use permits predictions to be accomplished accurately enough for our purposes. This chapter will deal with a general
formulation that encompasses all linear models that have been used in animal breeding
and related fields. Some suggestions for choosing a model will also be discussed.
All linear models can, I believe, be written as follows with proper definition of the
various elements of the model. Define the observable data vector with n elements as y.
In order for the problem to be amenable to a statistical analysis from which we can draw
inferences concerning the parameters of the model or can predict future observations it is
necessary that the data vector be regarded legitimately as a random sample from some
real or conceptual population with some known or assumed distribution. Because we
seldom know what the true distribution really is, a commonly used method is to assume
as an approximation to the truth that the distribution is multivariate normal. Analyses
based on this approximation often have remarkable power. See, for example, Cochran
(1937). The multivariate normal distribution is defined completely by its mean and by
its central second moments. Consequently we write a linear model for y with elements in
the model that determine these moments. This is
y = X + Zu + e.
X is a known, fixed, n p matrix with rank = r minimum of (n, p).
is a fixed, p 1 vector generally unknown, although in selection index methodology
it is assumed, probably always incorrectly, that it is known.
Z is a known, fixed, n q matrix.
u is a random, q 1 vector with null means.
e is a random, n 1 vector with null means.

The variance-covariance matrix of u is G, a q q symmetric matrix that is usually


non-singular. Hereafter for convenience we shall use the notation V ar(u) to mean a
variance-covariance matrix of a random vector.
V ar(e) = R is an n n, symmetric, usually non-singular matrix. Cov(u, e0 ) = 0,
that is, all elements of the covariance matrix for u with e are zero in most but not all
applications.
It must be understood that we have hypothesized a population of u vectors from
which a random sample of one has been drawn into the sample associated with the data
vector, y, and similarly a population of e vectors is assumed, and a sample vector has
been drawn with the first element of the sample vector being associated with the first
element of y, etc.
Generally we do not know the values of the individual elements of G and R. We
usually are willing, however, to make assumptions about the pattern of these values. For
example, it is often assumed that all the diagonal elements of R are equal and that all
off-diagonal elements are zero. That is, the elements of e have equal variances and are
mutually uncorrelated. Given some assumed pattern of values of G and R, it is then
possible to estimate these matrices assuming a suitable design (values of X and Z) and
a suitable sampling scheme, that is, guarantee that the data vector arose in accordance
with u and e being random vectors from their respective populations. With the model
just described
E(y) = mean of y = X.
V ar(y) = ZGZ0 + R.
We shall now present a few examples of well known models and show how these can
be formulated by the general model described above. The important advantage to having
one model that includes all cases is that we can thereby present in a condensed manner
the basic methods for estimation, computing sampling variances, testing hypotheses, and
prediction.

Simple Regression Model

The simple regression model can be written as follows,


y i = + xi + e i .
This is a scalar model, yi being the ith of n observations. The fixed elements of the model
are and , the latter representing the regression coefficient. The concomitant variable
associated with the ith observation is xi , regarded as fixed and measured without error.
2

Note that in conceptual repeated sampling the values of xi remain constant from one
sample to another, but in each sample a new set of ei is taken, and consequently the
values of yi change. Now relative to our general model,
y0 = (y 1 y 2 ... yn ),
0 = ( ),
"
#
1 1 ... 1
0
X =
, and
x1 x2 ... xn
e0 = (e1

e2 ... en )

Zu does not exist in the model. Usually R is assumed to be Ie2 in regression models.

One Way Random Model

Suppose we have a random sample of unrelated sires from some population of sires and
that these are mated to a sample of unrelated dams with one progeny per dam. The
resulting progeny are reared in a common environment, and one record is observed on
each. An appropriate model would seem to be
yij = + si + eij ,
yij being the observation on the j th progeny of the ith sire.
Suppose that there are 3 sires with progeny numbers 3, 2, l respectively. Then y is a
vector with 6 elements.
y0
x0
u0
e0
V ar(u)
V ar(e)

=
=
=
=
=
=

(y11 y12 y13 y21 y22 y31 ),


(1 1 1 1 1 1),
(s1 s2 s3 ), and
(e11 e12 e13 e21 e22 e23 ),
Is2 ,
Ie2 ,

where these two identity matrices are of order 3 and 6, respectively.


Cov(u, e0 ) = 0.
Suppose next that the sires in the sample are related, for example, sires 2 and 3 are
half-sib progeny of sire l, and all 3 are non-inbred. Then under an additive genetic model
3

1 1/2 1/2

V ar(u) = 1/2 1 1/4 s2 .


1/2 1/4 1
What if the mates are related? Suppose that the numerator relationship matrix, Am ,
for the 6 mates is

1 1/2 1/2 0
0
0
0
1
0
0 1/2 1/2

1/2 0
1 1/4 0
0
.
1/2 0 1/4 1
0
0

0 1/2 0
0
1 1/4
0 1/2 0
0 1/4 1

Suppose further that we invoke an additive genetic model with h2 = 1/4. Then

V ar(e) =

1
0
1/30 1/30
0
0
0
1
0
0
1/30 1/30

1/30
0
1
1/60
0
0
2.
e
1/30
0
1/60
1
0
0

0
1/30
0
0
1
1/60
0
1/30
0
0
1/60
1

This result is based on s2 = y2 /16, e2 = 15 y2 /16, and leads to


V ar(y) = (.25 Ap + .75 I) y2 ,
where Ap is the relationship matrix for the 6 progeny.

Two Trait Additive Genetic Model

Suppose that we have a random sample of 5 related animals with measurements on 2 correlated traits. We assume an additive genetic model. Let A be the numerator relationship
matrix of the 5 animals. Let
g11 g12
g12 g22

be the genetic variance-covariance matrix and


r11 r12
r12 r22

be the environmental variance-covariance matrix. Then h2 for trait 1 is g11 /(g11 +r11 ), and
the genetic correlation between the two traits is g12 /(g11 g22 )1/2 . Order the 10 observations,
animals within traits. That is, the first 5 elements of y are the observations on trait 1.
Suppose that traits 1 and 2 have common means 1 , 2 respectively. Then
0

X =

1 1 1 1 1 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1

and
0 = (1 2 ).
The first 5 elements of u are breeding values for trait 1 and the last 5 are breeding
values for trait 2. Similarly the errors are partitioned into subvectors with 5 elements
each. Then Z = I and
A g11 A g12
A g12 A g22

G = V ar(u) =

I r11 I r12
I r12 I r22

R = Var (e) =

where each I has order, 5.

Two Way Mixed Model

Suppose that we have a random sample of 3 unrelated sires and that they are mated to
unrelated dams. One progeny of each mating is obtained, and the resulting progeny are
assigned at random to two different treatments. The table of subclass numbers is

Sires
1
2
3

Treatments
1
2
2
1
0
2
3
0

Ordering the data by treatments within sires,


y0 =

y111 y112 y121 y221 y222 y311 y312 y313

Treatments are regarded as fixed, and variances of sires and errors are considered to
be unaffected by treatments. Then
u0 =

s1 s2 s3 st11 st12 st22 st31

Z =

1
1
1
0
0
0
0
0

0
0
0
1
1
0
0
0

0
0
0
0
0
1
1
1

1
1
0
0
0
0
0
0

0
0
1
0
0
0
0
0

0
0
0
1
1
0
0
0

0
0
0
0
0
1
1
1

2
V ar(s) = I3 s2 , V ar(st) = I4 st
, V ar(e) = I8 e2 .

Cov(s, (st0 )) = 0.
This is certainly not the only linear model that could be invoked for this design. For
example, one might want to assume that sire and error variances are related to treatments.

Equivalent Models

It was stated above that a linear model must describe the mean and the variancecovariance matrix of y. Given these two, an infinity of models can be written all of
which yield the same first and second moments. These models are called linear equivalent
models.
Let one model be y = X + Zu + e with V ar(u) = G, V ar(e) = R. Let a second
model be y = X + Z u + e , with V ar(u ) = G , V ar(e ) = R . Then the means
of y under these 2 models are X and X respectively. V ar(y) under the 2 models is
0

ZGZ0 + R and Z G Z + R .
6

Consequently we state that these 2 models are linearly equivalent if and only if
0

X = X and ZGZ0 + R = Z G Z + R .
To illustrate, X = X suppose we have a treatment design with 3 treatments
and 2 observations on each. Suppose we write a model
yij = + ti + eij ,
then

1
1
1
1
1
1

X =

1
1
0
0
0
0

0
0
1
1
0
0

0
0
0
0
1
1

t1

.
t
2

t3

An alternative model is
yij = i + eij ,
then

X =

1
1
0
0
0
0

0
0
1
1
0
0

0
0
0
0
1
1

1
2
.
3

Then if we define i = + ti , it is seen that E(y) is the same in the two models.
To illustrate with two models that give the same V ar(y) consider a repeated lactation
model. Suppose we have 3 unrelated, random sample cows with 3, 2, 1 lactation records,
respectively. Invoking a simple repeatability model, that is, the correlation between any
pair of records on the same animal is r, one model ignoring the fixed effects is
yij = ci + eij .

c1
r 0 0

2
V ar(c) = V ar
c2 = 0 r 0 y .
c3
0 0 r
7

V ar(e) = I6 (1 r) y2 .
An alternative for the random part of the model is
yij = eij ,
where Zu does not exist.

1
r
r
0
0
0

V ar() = R =

r
1
r
0
0
0

r
r
1
0
0
0

0
0
0
1
r
0

0
0
0
r
1
0

0
0
0
0
0
1

y2 .

Relating the 2 models,


2 = c2 + e2 .
Cov(ij , ij , ) = c2 for j 6= j 0 .
We shall see that some models are much easier computationally than others. Also
the parameters of one model can always be written as linear functions of the parameters
of any equivalent model. Consequently linear and quadratic estimates under one model
can be converted by these same linear functions to estimates for an equivalent model.

Subclass Means Model

With some models it is convenient to write them as models for the smallest subclass
mean. By smallest we imply a subclass identified by all of the subscripts in the model
except for the individual observations. For this model to apply, the variance-covariance
matrix of elements of e pertaining to observations in the same smallest subclass must
have the form

c
..

no covariates exist, and the covariances between elements of e in different subclasses must
be zero. Then the model can be written
8

+ Zu
+ .
= X
y
and Z
relate these means to elements
is the vector of smallest subclass means. X
y
of and u. The error vector, , is the mean of elements of e in the same subclass. Its
variance-covariance matrix is diagonal with the ith diagonal element being


v
ni

ni 1
ni

e2 ,

where ni is the number of observations in the ith subclass.

Determining Possible Elements In The Model

Henderson(1959) described in detail an algorithm for determining the potential lines of an


ANOVA table and correspondingly the elements of a linear model. First, the experiment
is described in terms of two types of factors, namely main factors and nested factors. By
a main factor is meant a classification, the levels of which are identified by a single
subscript. By a nesting factor is meant one whose levels are not completely identified
except by specifying a main factor or a combination of main factors within which the
nesting factor is nested. Identify each of the main factors by a single unique letter, for
example, B for breeds and T for treatments. Identify nesting factors by a letter followed by
a colon and then the letter or letters describing the main factor or factors within which it is
nested. For example, if sires are nested within breeds, this would be described as S:B. On
the other hand, if a different set of sires is used for each breed by treatment combination,
sires would be identified as S:BT. To determine potential 2 factor interactions combine
the letters to the left of the colon (for a main factor a colon is implied with no letters
following). Then combine the letters without repetition to the right of the colon. If no
letter appears on both the right and left of the colon this is a valid 2 factor interaction.
For example, factors are A,B,C:B. Two way combinations are AB, AC:B, BC:B. The third
does not qualify since B appears to the left and right of the colon. AC:B means A by
C interaction nested within B. Three factor and higher interactions are determined by
taking all possible trios and carrying out the above procedure. For example, factors are
(A, D, B:D, C:D). Two factor possibilities are (AD, AB:D, AC:D, DB:D, DC:D, BC:D).
The 4th and 5th are not valid. Three factor possibilities are (ADB:D, ADC:D, ABC:D,
DBC:D). None of these is valid except ABC:D. The four factor possibility is ADBC:D,
and this is not valid.
Having written the main factors and interactions one uses each of these as a subvector
of either or u. The next question is how to determine which. First consider main factors
and nesting factors. If the levels of the factor in the experiment can be regarded as a
9

random sample from some population of levels, the levels would be a subvector of u. With
respect to interactions, if one or more letters to the left of the colon represent a factor in
u, the interaction levels are subvectors of u. Thus interaction of fixed by random factors
is regarded as random, as is the nesting of random within fixed. As a final step we decide
the variance-covariance matrix of each subvector of u, the covariance between subvectors
of u, and the variance- covariance matrix of (u, e). These last decisions are based on
knowledge of the biology and the sampling scheme that produced the data vector.
It seems to me that modelling is the most important and most difficult aspect of
linear models applications. Given the model everything else is essentially computational.

10

Chapter 2
Linear Unbiased Estimation
C. R. Henderson
1984 - Guelph

We are interested in linear unbiased estimators of or of linear functions of , say


k . That is, the estimator has the form, a0 y, and E (a0 y) = k0 , if possible. It is
not necessarily the case that k0 can be estimated unbiasedly. If k0 can be estimated
unbiasedly, it is called estimable. How do we determine estimability?
0

Verifying Estimability
E(a0 y) = a0 X.

Does this equal k0 ? It will for any value of if and only if a0 X = k0 .


Consequently, if we can find any a such that a0 X = k0 , then k0 is estimable. Let
us illustrate with

1 1 2
1 2 4

X =
.
1 1 2
1 3 6
Is 1 estimable, that is, (1 0 0) estimable? Let a0 = (2 1 0 0) then
a0 X = (1 0 0) = k0 .
Therefore, k0 is estimable.
Is (0 1 2) estimable? Let a0 = (1 1 0 0) then
a0 X = (0 1 2) = k0 .
Therefore, it is estimable.
Is 2 estimable? No, because no a0 exists such that a0 X = (0 1 0).
Generally it is easier to prove by the above method that an estimable function is indeed estimable than to prove that a non-estimable function is non-estimable. Accordingly,
we consider other methods for determining estimability.
1

1.1

Second Method

Partition X as follows with possible re-ordering of columns.


X = (X1

X1 L),

where X1 has r linearly independent columns. Remember that X is n p with rank = r.


The dimensions of L are r (p r).
Then k0 is estimable if and only if
0

k = (k1
0

k1 L),

where k1 has r elements, and k1 L has p r elements. Consider the previous example.

X1 =

1
1
1
1

1
2
1
3

and L =

0
2

0
2

Is (1 0 0) estimable?
0

k1 = (1 0), k1 L = (1 0)

= 0.

Thus k0 = (1 0 0), and the function is estimable.


Is (0 1 2) estimable?
0

k1 = (0 1), and k1 L = (0 1)

0
2

0
2

= 2.

Thus k0 = (0 1 2), and the function is estimable.


Is (0 1 0) estimable?
0

k1 = (0 1), and k1 L = (0 1)
0

Thus (k1

1.2

= 2.

k1 L). = (0 1 2) 6= (0 1 0). The function is not estimable.

Third Method

A third method is to find a matrix, C, of order p (p r) and rank, p r, such that


XC = 0.
2

Then k0 is estimable if and only if


k0 C = 0.
In the example

1
1
1
1

1
2
1
3

2
4
2
6

2
1

0
0
0
0

Therefore (1 0 0) is estimable because

(1 0 0) 2

0.

0.

So is (0 1 2) because

(0 1 2)

2
1

But (0 1 0) is not because

(0 1 0)
2
1

1.3

2 6= 0.

Fourth Method

A fourth method is to find some g-inverse of X0 X, denoted by (X0 X) . Then k0 is


estimable if and only if
k0 (X0 X) X0 X = k0 .
A definition of and methods for computing a g-inverse are presented in Chapter 3.
In the example

4 7 14

X0 X = 7 15 30 ,
14 30 60
and a g-inverse is

15 7 0
1
4 0
7
.
11
0
0 0
3

(1 0 0) (X0 X) X0 X = (1 0 0). Therefore (1 0 0) is estimable.


(0 1 2) (X0 X) X0 X = (0 1 2). Therefore (0 1 2) is estimable.
(0 1 0) (X0 X) X0 X = (0 1 2). Therefore (0 1 0) is not estimable.
Related to this fourth method any linear function of
(X0 X) X0 X
is estimable.
If rank (X) = p = the number of columns in X, any linear function of is estimable.
In that case the only g-inverse of X0 X is (X0 X)1 , a regular inverse. Then by the fourth
method
k0 (X0 X) X0 X = k0 (X0 X)1 X0 X = k0 I = k0 .
Therefore, any k0 is estimable.
There is an extensive literature on generalized inverses. See for example, Searle
(1971b, 1982), Rao and Mitra (1971) and Harville(1999??).

Chapter 3
Best Linear Unbiased Estimation
C. R. Henderson
1984 - Guelph

In Chapter 2 we discussed linear unbiased estimation of k0 , having determined


that it is estimable. Let the estimate be a0 y, and if k0 is estimable, some a exists such
that
E(a0 y) = k0 .
Assuming that more than one a gives an unbiased estimator, which one should be chosen?
The most common criterion for choice is minimum sampling variance. Such an estimator
is called the best linear unbiased estimator (BLUE).
Thus we find a0 such that E(a0 y) = k0 and, in the class of such estimators, has
minimum sampling variance. Now
V ar(a0 y) = a0 (V ar(y))a = a0 Va,
where V ar(y) = V, assumed known, for the moment.
For unbiasedness we require a0 X = k0 . Consequently we find a that minimizes a0 Va
subject to a0 X = k0 . Using a Lagrange multiplier, , and applying differential calculus
we need to solve for a in equations
V X
X0 0

0
k

This is a consistent set of equations if and only if k0 is estimable. In that case the
unique solution to a is
V1 X(X0 V1 X) k.
A solution to is
(X0 V1 X) k,
and this is not unique when X and consequently X0 V1 X is not full rank. Nevertheless
the solution to a is invariant to the choice of a g-inverse of X0 V1 X. Thus, BLUE of k0
is
k0 (X0 V1 X) X0 V1 y.
But let
o = (X0 V1 X) X0 V1 y,
1

where o is any solution to


(X0 V1 X) o = X0 V1 y
known as generalized least squares (GLS) equations, Aitken (1935). Superscript 0 is used
to denote some solution, not a unique solution. Therefore BLUE of k0 is k0 o .
Let us illustrate with

X =

1
1
1
1

1
2
1
3

2
4
2
6

and y0 = (5 2 4 3). Suppose V ar(y) = Ie2 . Then the GLS equations are

4 7 14

2
e 7 15 30
14 30 60

14

o
= 22 e2 .
44

A solution is
( o )0 = (56 10 0)/11.
Then BLUE of (0 1 2), which has been shown to be estimable, is
(0 1 2)(56 10 0)0 /11 = 10/11.
Another solution to o is
(56 0 5)0 /11.
Then BLUE of (0 1 2) is 10/11, the same as the other solution to o .

Mixed Model Method For BLUE

One frequent difficulty with GLS equations, particularly in the mixed model, is that
V = ZGZ0 + R is large and non-diagonal. Consequently V1 is difficult or impossible to
compute by usual methods. It was proved by Henderson et al. (1959) that
V1 = R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 .
Now if R1 is easier to compute than V1 , as is often true, if G1 is easy to compute, and (Z0 R1 Z + G1 )1 is easy to compute, this way of computing V1 may have
important advantages. Note that this result can be obtained by writing equations, known
as Hendersons mixed model equations (1950) as follows,
X0 R1 X X0 R1 Z
Z0 R1 X Z0 R1 Z + G1

o
u

X0 R1 y
Z0 R1 y

Note that if we solve for u


in the second equation and substitute this in the first we
get
X0 [R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ]X o
= X0 [R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ]y,
or from the result for V1
X0 V1 X o = X0 V1 y.
Thus, a solution to o in the mixed model equations is a GLS solution. An interpretation of u
is given in Chapter 5. The mixed model equations are often well suited to an
iterative solution. Let us illustrate the mixed model method for BLUE with

X=

1
1
1
1

1
2
1
3

Z=

1
1
1
0

0
0
0
1

.1 0
0 .1

G=

and
R = I, y0 = [5 4 3 2].
Then the mixed model equations are

4 7 3 1
7 15 4 3
3 4 13 0
1 3 0 11

o
u

14
22
12
2

The solution is [286 50 2 2]/57. In this case the solution is unique because X has
full column rank.
Now consider a GLS solution.

V = [ZGZ0 + R] =

V1 =

143

1.1 .1 .1
0
.1 1.1 .1
0
.1 .1 1.1
0
0
0
0 1.1

132 11 11
0
11 132 11
0
11 11 132
0
0
0
0 130

Then X0 V1 X o = X0 V1 y becomes
1
143

460 830
830 1852

1
=
143
o

1580
2540

The solution is (286 50)/57 as in the mixed model equations.


3

Variance of BLUE

Once having an estimate of k0 we should like to know its sampling variance. Consider a
set of estimators, K0 o .

V ar(K0 o ) = V ar[K0 (X0 V1 X) X0 V1 y]


= K0 (X0 V1 X) X0 V1 VV1 X(X0 V1 X) K
= K0 (X0 V1 X) K provided K0 is estimable.
The variance is invariant to the choice of a g-inverse provided K0 is estimable. We
can also obtain this result from a g-inverse of the coefficient matrix of the mixed model
equations. Let a g-inverse of this matrix be
C11 C12
C21 C22

Then
V ar(K0 o ) = K0 C11 K.
This result can be proved by noting that
C11 = (X0 [R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ]X)
= (X0 V1 X) .
Using the mixed model example, let
1 0
0 1

K =

A g-inverse (regular inverse) of the coefficient matrix is


926 415 86
29
415
230
25 25

.
86
25
56
1
29 25
1
56

570
Then

1
V ar(K ) =
570
0

926 415
415
230

The same result can be obtained from the inverse of the GLS coefficient matrix
because
! !1
!
1
460 830
926 415
1
143
=
.
830 1852
230
570 415
4

Generalized Inverses and Mixed Model Equations

Earlier in this chapter we found that BLUE of K0 , estimable, is K0 o , where o is any


solution to either GLS or mixed model equations. Also the sampling variance requires a ginverse of the coefficient matrix of either of these sets of equations. We define (X0 V1 X)
as a g-inverse of X0 V1 X. There are various types of generalized inverses, but the one we
shall use is defined as follows.
A is a g-inverse of A provided that
A A A = A.
Then if we have a set of consistent equations,
A p = z,
a solution to p is
A z.
We shall be concerned, in this chapter, only with g-inverses of singular, symmetric
matrices characteristic of GLS and mixed model equations.

3.1

First type of g-inverse

Let W be a symmetric matrix with order, s, and rank, t < s. Partition W with possible
re-ordering of rows (and the same re-ordering of columns) as
W=

W11 W12
0
W12 W22

where W11 is a non-singular matrix with order t. Then W

1
W11
0
0
0

It is of interest that for this type of W it is true that W W W = W as well as


W W W = W. This is called a reflexive g-inverse. To illustrate, suppose W is a GLS
coefficient matrix,

4 7 8 15
7 15 17 32

W=

.
8 17 22 39
15 32 39 71
This matrix has rank 3 and the upper 3 3 is non-singular with inverse

41 18 1

24 12
301 18
.
1 12
11
5

Therefore a g-inverse is

301

41 18 1 0
18
24 12 0
1 12
11 0
0
0
0 0

Another g-inverse of this type is

301

41 17 0 1
17
59 0 23

.
0
0 0
0
1 23 0
11

This was obtained by inverting the full rank submatrix composed of rows (and columns)
1, 2, 4 of W. This type of g-inverse is described in Searle (1971b).
In the mixed model equations a comparable g-inverse is obtained as follows. Partition
X R1 X with possible re-ordering of rows (and columns) as
0

X1 R1 X1 X1 R1 X2
0
0
X2 R1 X1 X2 R1 X2

so that X1 R1 X1 has order r and is full rank. Compute


0

X1 R1 X1 X1 R1 Z
Z0 R1 X1 Z0 R1 Z + G1

!1

C00 C02
0
C02 C22

C00 0 C02

Then a g-inverse of the coefficient matrix is 0 0 0 . We illustrate with a


0
C02 0 C22
mixed model coefficient matrix as follows.

5
8 8
3
2

8
16 16
4
4

8 16
16 4 4

3
4 4
8
0

2
4 4
0
7

where X has 3 columns and Z has 2. Therefore X0 R1 X is the upper 3 x 3 submatrix.


It has rank 2 because the 3rd column is the negative of the second. Consequently find a
g-inverse by inverting the matrix with the 3rd row and column deleted. This gives

1
560

656 300 0 96 16

300
185 0
20 20

0
0 0
0
0
.
96
20 0
96
16

16 20 0
16
96
6

With this type of g-inverse the solution to o is ( o1 0)0 , where o1 has r elements. Only
the first p rows of the mixed model equations contribute to lack of rank of the mixed
model matrix. The matrix has order p + q and rank r + q, where r = rank of X, p =
columns in X, and q = columns in Z.

3.2

Second type of g-inverse

A second type of g-inverse is one which imposes restrictions on the solution to o . Let
M0 be a set of p r linearly independent, non-estimable functions
of . Then a!g-inverse
!1
0 1
C11 C12
XV X M
.
for the GLS matrix is obtained as follows
=
0
0
M
O
C12 C22
C11 is a reflexive g-inverse of X0 V1 X. This type of solution is described in Kempthorne
(1952). Let us illustrate GLS equations as follows.

11
5
6
3
8

5
5
0
2
3

6
0
6
1
5

3
2
1
3
0

8
3
5
0
8

o =

12
7
5
8
4

This matrix has order 5 but rank only 3. Two independent non-estimable functions are
needed. Among others the following qualify
0 1 1 0 0
0 0 0 1 1

5
5
0
2
3
1
0

0
0
0
1
1
0
0

Therefore we invert

11
5
6
3
8
0
0

6
0
6
1
5
1
0

3
2
1
3
0
0
1

8
3
5
0
8
0
1

0
1
1
0
0
0
0

which is

2441

28 1
1
13 13 122 122

1
24 24 7
7
122
0

1 24
24
7 7
122
0

13 7
7
30 30
0
122
.

13
7 7 30
30
0
122

122 122 122


0
0
0
0

122
0
0 122 122
0
0
7

The upper 5 x 5 submatrix is a g-inverse. This gives a solution


o = (386 8 8 262 262)0 /244.
A corresponding g-inverse for the mixed model is as follows
1

X0 R1 X X0 R1 Z
M
0 1
0 1
1
0
ZR X ZR Z+G

0
M
0
0

Then
C11 C12
0
C12 C22

C11 C12 C13


0

= C12 C22 C23 .


0
C13 C23 C33

is a g-inverse of the mixed model coefficient matrix. The property of o coming from this
type of g-inverse is
M0 o = 0.

3.3

Third type of g-inverse

A third type of g-inverse uses M of the previous section as follows.


(X V 1 X +
0 1
0 1
0 1
MM ) = C. Then C is a g-inverse of X V X. In this case C(X V X)C 6= C. This
is described in Rao and Mitra (1971).
We illustrate with the same GLS matrix as before and
0

M =

0 1 1 0 0
0 0 0 1 1

as before.

(X0 V1 X + MM0 ) =

with inverse

1
244

11
5
6
3
8

5
6
1
2
3

6
1
7
1
5

3
2
1
4
1

8
3
5
1
9

150 62 60 48 74

62
85
37 7
7

60
37
85
7 7
,
48 7
7
91
31

74
7 7
31
91

which is a g-inverse of the GLS matrix. The resulting solution to o is the same as the
previous section.

The corresponding method for finding


a g-inverse of the mixed model matrix is
!1
0
0 1
0 1
X R X + MM X R Z
= C. Then C is a g-inverse. The property
Z0 R1 X
Z0 R1 Z + G1
of the solution to o is
M0 o = 0.

Reparameterization

An entirely different method for dealing with the not full rank X problem is reparameter be
ization. Let K0 be a set of r linearly independent, estimable functions of . Let
solve (KK)1 K0 X0 V1 XK(K0 K)1
= (K0 K)1 K0 X0 V1 y.
BLUE of K0 . To find
has a unique solution, and the regular inverse of the coefficient matrix is V ar().
This

corresponds to a model
E(y) = X K(K0 K)1 .
This method was suggested to me by Gianola (1980).
From the immediately preceding example we need 3 estimable functions. An independent set is

1 1/2 1/2 1/2 1/2

1 1
0
0
0
.
0
0
0
1 1
The corresponding GLS equations are

12
11 .50 2.50

=
.75
1 .
.5 2.75

2
2.5
.75
2.75
The solution is
0 = (193 8 262)/122.

This is identical to

1 1/2 1/2 1/2 1/2

o
1 1
0
0
0

0
0
0
1 1
from the previous solution in which
0 1 1 0 0
0 0 0 1 1
was forced to equal 0.

The corresponding set of equations for mixed models is


(K0 K)1 K0 X0 R1 XK(K0 K)1 (K0 K)1 K0 X0 R1 Z
Z0 R1 XK(K0 K)1
Z0 R1 Z + G1

(K0 K)1 K0 X0 R1 y
Z0 R1 y

Precautions in Solving Equations

Precautions must be observed in the solution to equations, especially if there is some


doubt about the rank of the matrix. If a supposed g-inverse is calculated, it may be
advisable to check that AA A = A. Another check is to regenerate the right hand sides
as follows. Let the equations be
= r.
C
compute C
and check that it is equal, except for rounding error,
Having computed ,
to r.

10

Chapter 4
Test of Hypotheses
C. R. Henderson
1984 - Guelph

Much of the statistical literature for many years dealt primarily with tests of hypotheses ( or tests of significance). More recently increased emphasis has been placed, properly
I think, on estimation and prediction. Nevertheless, many research workers and certainly
most editors of scientific journals insist on tests of significance. Most tests involving linear
models can be stated as follows. We wish to test the null hypothesis,

H0 = c 0 ,
against some alternative hypothesis, most commonly the alternative that can have any
value in the parameter space. Another possibility is the general alternative hypothesis,

Ha = c a .
In both of these hypotheses there may be elements of that are not determined
0
by H. These elements are assumed to have any values in the parameter space. H0 and
0
Ha are assumed to have full row rank with m and a rows respectively. Also r m > a.
Under the unrestricted hypothesis a = 0.
0

Two important restrictions are required logically for H0 and Ha . First, both H0
0
and Ha must be estimable. It hardly seems logical that we could test hypotheses about
functions of unless we can estimate these functions. Second, the null hypothesis must
be contained in the alternative hypothesis. That is, if the null is true, the alternative
0
0
must be true. For this to be so we require that Ha can be written as MH0 and ca as Mc0
for some M.

Equivalent Hypotheses

It should be recognized that there are an infinity of hypotheses that are equivalent to
0
0
H0 = c. Let P be an m m, non-singular matrix. Then PH0 = Pc is equivalent to
1

H0 = c. For example, consider a fixed model


yij = + ti + eij ,

i = 1, 2, 3.

A null hypothesis often tested is


1 0 1
0 1 1

t = 0.

An equivalent hypothesis is
2/3 1/3 1/3
1/3
2/3 1/3

t = 0.

To convert the first to the second pre-multiply


1 0 1
0 1 1

2/3 1/3
1/3
2/3

by

As an example of use of Ha consider a type of analysis sometimes recommended for


a two way fixed model without interaction. Let the model be yijk = + ai + bj + eijk ,
where i = 1, 2, 3 and j = 1, 2, 3, 4. The lines of the ANOVA table could be as
follows.
Sum of Squares
Rows ignoring columns (column differences regarded as non-existent),
Columns with rows accounted for,
Residual.
The sum of these 3 sums of squares is equal to (y0 y correction factor). The first
sum of squares is represented as testing the null hypothesis:

0
0
0
0
0

1
0
0
0
0

0 1 0 0 0
0
1 1 0 0 0
0
0
0 1 0 0 1
0
0 0 1 0 1
0
0 0 0 1 1

= 0.

and the alternative hypothesis:

0 0 0 0 1 0 0 1

0 0 0 0 0 1 0 1 = 0.
0 0 0 0 0 0 1 1
The second sum of squares represents testing the null hypothesis:

0 0 0 0 1 0 0 1

0 0 0 0 0 1 0 1 = 0.
0 0 0 0 0 0 1 1
and the alternative hypothesis: entire parameter space.
2

2
2.1

Test Criteria
Differences between residuals

Now it is assumed for purposes of testing hypotheses that y has a multivariate normal
distribution. Then it can be proved by the likelihood ratio method of testing hypotheses,
Neyman and Pearson (1933), that under the null hypothesis the following quantity is
distributed as 2 .
(y X 0 )0 V1 (y X 0 ) (y X a )0 V1 (y X a ).

(1)

0 is a solution to GLS equations subject to the restriction H0 0 = c0 . 0 can be


found by solving
!
!
!
X0 V1 X H0
X0 V1 y
0
=
0
0
c0
H0
0
or by solving the comparable mixed model equations
X0 R1 X
X0 R1 Z
H0
X0 R1 y
0
0 1

0 1
0 1
1
0 u0 = Z R y .
ZR X ZR Z+G
0
c0
0
H0
0
0

a is a solution to GLS or mixed model equations with restrictions, Ha a = ca


0
rather than H0 0 = c0 .
In case the alternative hypothesis is unrestricted ( can have any values), that
is, a is a solution to the unrestricted GLS or mixed model equations. Under the null
hypothesis (1) is distributed as 2 with (m a) degrees of freedom, m being the number
0
0
of rows (independent) in H0 , and a being the number of rows (independent) in Ha . If the
alternative hypothesis is unrestricted, a = 0. Having computed (1) this value is compared
with values of 2ma for the chosen level of significance.
Let us illustrate with a model

y = + ti + eij
, ti fixed, i = 1, 2, 3
R = V ar(e) = 5I.
Suppose that the number of observations on the levels of ti are 4, 3, 2, and the
treatment totals are 25, 15, 9 with individual observations, (6, 7, 8, 4, 4, 5, 6, 5, 4). We
wish to test that the levels of ti are equal, which can be expressed as
0 1 0 1
0 0 1 1

( t1 t2 t3 )0 = (0 0)0 .
3

We use as the alternative


under the restriction are

9 4

4 4

3 0
.2
2 0

0 1
0 0
A solution is

hypothesis the unrestricted hypothesis. The GLS equations

3
2
0
0
0
0
1
0

3
0
0
1

0
2 1 1

0 1
0
0
1 1
0
0

0
0

= .2

49
25
15
9
0
0

o = (49 0 0 0)/9, o = (29 12)/9.


The GLS equations with no restrictions are

.2

9
4
3
2

4
4
0
0

3
0
3
0

2
0
0
2

= .2

49
25
15
9

A solution is a = (0 25 20 18)/4.
(y X o )0
(y X o )0 V1 (y X o )
(y X a )0
(y X a )0 V1 (y X a )
The difference is

2.2

146
45

9
4

=
=
=
=
=

(5 14 23 13 13 4 5 4 13)/9.
146/45.
[1, 3, 7, 9, 4, 0, 4, 2, 2]/4.
9/4.
179
.
180

Differences between reductions

Two easier methods of computation that lead to the same result will now be presented.
The first, described in Searle (1971b), is
0

a X0 V1 y + a ca o X0 V1 y o co .

(2)

The first 2 terms are called reduction in sums of squares under the alternative hypothesis.
The last two terms are the negative of the reduction in sum of squares under the null
hypothesis. In our example
0

a X0 V1 y + a ca = 1087/20.
0
0
o X0 V1 y + o co = 2401/45.
1087
2401
179

=
as before.
20
45
180
4

If the mixed model equations are used, (2) can be computed as


0

a X0 R1 y + ua Z0 R1 y + a ca o X0 R1 y uo Z0 R1 y o co .

2.3

(3)

Method based on variances of linear functions

A second easier method is

(Ho o co )0 [Ho (X0 V1 X) Ho ]1 (Ho o co )


0
0
0
(Ha o ca )0 [Ha (X0 V1 X) Ha ]1 (Ha o ca ).

(4)

If Ha is unrestricted the second term of (4) is set to 0. Remember that o is a solution in the unrestricted GLS equations. In place of (X0 V1 X) one can substitute the
corresponding submatrix of a g-inverse of the mixed model coefficient matrix.
This is a convenient point to prove that an equivalent hypothesis, P(H0 c) = 0
gives the same result as H0 c, remembering that P is non-singular. The quantity
corresponding to (4) for P (H0 c) is
(H0 o c)0 P0 [PH0 (X0 V1 X) HP0 ]1 P(H0 c)
= (H0 o c)0 P0 (P0 )1 [H0 (X0 V1 X) H]1 P1 P(H0 o c)
= (H0 o c)0 [H0 (X0 V1 X)H]1 (H0 o c),
which proves the equality of the two equivalent hypotheses.
Let us illustrate (3) with our example
0 1 0 1
0 0 1 1

0 1 0 1
0 0 1 1

!
0

(0 25 20 18) /4 =

A g-inverse of X0 V1 X is

0 0 0 0
0 15 0 0
0 0 20 0
0 0 0 30

H0 (X V X) H0 =

/12.

45 30
30 50

/12.

7
2

/4.

The inverse of this is


20 12
12
18
Then
1
(7 2)
4

20 12
12
18

1
45

/45.
7
2

1
179
=
as before.
4
180

The d.f. for 2 are 2 because H0 has 2 rows and the alternative hypothesis is unrestricted.

2.4

Comparison of reductions under reduced models

Another commonly used method is to compare reductions in sums of squares resulting


from deletions of different subvectors of from the reduction. The difficulty with this
method is the determination of what hypothesis is tested by the difference between a pair
of reductions. It is not true in general, as sometimes thought, that Red() Red( 1 )
0
0
tests the hypothesis that 2 = 0, where 0 = ( 1 2 ). In most designs, 2 is not
estimable. We need to determine what H0 imposed on a solution will give the same
reduction in sum of squares as does Red( 1 ).
In the latter case we solve
0

(X1 V1 X1 ) o1 = X1 V1 y
and then

Reduction = ( o1 )0 X1 V1 y.

(5)

Consider a hypothesis, H0 2 = 0. We could solve


0

X1 V1 X1 X1 V1 X2 0
o1
X1 V1 y
0
0
0

o
X2 V1 X1 X2 V1 X2 H 2 = X2 V1 y .

0
H0
0
0

Then

Reduction = ( o1 )0 X1 V1 y + ( o2 )0 X2 V1 y.

(6)

(7)

Clearly (7) is equal to (5) if a solution to (6) is o2 = 0, for then


0

o1 = (X1 V1 X1 ) X1 V1 y.
Consequently in order to determine what hypothesis is implied when 2 is deleted
from the model, we need to find some H 0 2 = 0 such that a solution to (6) is o2 = 0.
We illustrate with a two way fixed model with interaction. The numbers of observations per subclass are
!
3 2 1
.
1 2 5
6

The subclass totals are


6 2 2
3 5 9

An analysis sometimes suggested is


Red(, r, c) Red(, c) to test rows.
Red(f ull model) Red(, r, c) to test interaction.
The least squares equations are

14 6 8 4

6 0 3

8 1

4
2
2
0
4

6
1
5
0
0
6

3
3
0
3
0
0
3

2
2
0
0
2
0
0
2

1
1
0
0
0
1
0
0
1

1
0
1
1
0
0
0
0
0
1

2
0
2
0
2
0
0
0
0
0
2

5
0
5
0
0
5
0
0
0
0
0
5

o =

27
10
17
9
7
11
6
2
2
3
5
9

A solution to these equations is


[0, 0, 0, 0, 0, 0, 2, 1, 2, 3, 2.5, 1.8],
which gives a reduction of 55.7, the full model reduction. A solution when interaction
terms are deleted is
[1.9677, .8065, 0, .8871, .1855, 0]
giving a reduction of 54.3468. This corresponds to an hypothesis,
1 0 1 1
0 1
0 1 1
0 1 1

rc = 0.

When this is included as a Lagrange multiplier as in (6), a solution is


[1.9677, .8065, 0, .8871, .1855, 0, 0, 0, 0, 0, 0, 0, .1452, .6935].
Note that (rc)o = 0, proving that dropping rc corresponds to the hypothesis stated
above. The reduction again is 54.3468.
When r and rc are dropped from the equations, a solution is
[0, 2.25, 1.75, 1.8333]
7

giving a reduction of 52.6667. This corresponds to an hypothesis

3 3 1 1
1 1 1 1

0
0
1
0
1
1
0
1

0
0 0 1 1
0 1
1

r
rc

= 0.

When this is added as a Lagrange multiplier, a solution is


[2.25, 0, 0, 0, .5, .4167, 0, 0, 0, 0, 0, 0, .6944, .05556, .8056].
Note that ro and rco are null, verifying the hypothesis. The reduction again is 52.6667.
Then the tests are as follows:
Rows assuming rc non-existent = 54.3468 - 52.6667.
Interaction = 55.7 - 54.3468.

Chapter 5
Prediction of Random Variables
C. R. Henderson
1984 - Guelph

We have discussed estimation of , regarded as fixed. Now we shall consider a rather


different problem, prediction of random variables, and especially prediction of u. We
can also formulate this problem as estimation of the realized values of random variables.
These realized values are fixed, but they are the realization of values from some known
population. This knowledge enables better estimates (smaller mean squared errors) to
be obtained than if we ignore this information and estimate u by GLS. In genetics the
predictors of u are used as selection criteria. Some basic results concerning selection are
now presented.
Which is the more logical concept, prediction of a random variable or estimation of
the realized value of a random variable? If we have an animal already born, it seems
reasonable to describe the evaluation of its breeding value as an estimation problem. On
the other hand, if we are interested in evaluating the potential breeding value of a mating
between two potential parents, this would be a problem in prediction. If we are interested
in future records, the problem is clearly one of prediction.

Best Prediction

Let w = f (y) be a predictor of the random variable w. Find f (y) such that E(w w)2
is minimum. Cochran (1951) proved that
f (y) = E(w | y).

(1)

This requires knowing the joint distribution of w and y, being able to derive the conditional mean, and knowing the values of parameters appearing in the conditional mean.
All of these requirements are seldom possible in practice.
Cochran also proved in his 1951 paper the following important result concerning selection. Let p individuals regarded as a random sample from some population as candidates
for selection. The realized values of these individuals are w1 , . . . wp , not observable. We
can observe yi , a vector of records on each. (wi , yi ) are jointly distributed as f (w, y) independent of (wj , yj ). Some function, say f (yi ), is to be used as a selection criterion and the
fraction, , with highest f (yi ) is to be selected. What f will maximize the expectation
1

of the mean of the associated wi ? Cochran proved that E(w | y) accomplishes this goal.
This is a very important result, but note that seldom if ever do the requirements of this
theorem hold in animal breeding. Two obvious deficiencies suggest themselves. First, the
candidates for selection have differing amounts of information (number of elements in y
differ). Second, candidates are related and consequently the yi are not independent and
neither are the wi .
Properties of best predictor
1.
2.
3.

E(wi ) = E(wi ).
V ar(wi wi ) = V ar(w | y)
averaged over the distribution of y.
Maximizes rww
for all functions of y.

(2)
(3)
(4)

Best Linear Prediction

Because we seldom know the form of distribution of (y, w), consider a linear predictor
that minimizes the squared prediction error. Find w
= a0 y + b, where a0 is a vector and
b a scalar such that E(w w)2 is minimum. Note that in contrast to BP the form of
distribution of (y, w) is not required. We shall see that the first and second moments are
needed.
Let
E(w)
E(y)
Cov(y, w)
V ar(y)

=
=
=
=

,
,
c, and
V.

Then
E(a0 y + b w)2 = a0 Va 2a0 c + a0 0 a + b2
+ 2a0 b 2a0 2b + V ar(w) + 2 .
Differentiating this with respect to a and b and equating to 0
V + 0
0
1

a
b

c +

The solution is
a = V1 c, b = 0 V1 c.
2

(5)

Thus
w = + c0 V1 (y ).
Note that this is E(w | y) when y, w are jointly normally distributed. Note also that BLP
is the selection index of genetics. Sewall Wright (1931) and J.L. Lush (1931) were using
this selection criterion prior to the invention of selection index by Fairfield Smith (1936).
I think they were invoking the conditional mean under normality, but they were not too
clear in this regard.
Other properties of BLP are unbiased, that is
E(w)
= E(w).

(6)

E(w)
= E[ + c0 V1 (y )]
= + c0 V1 ( )
= = E(w).
V ar(w)
= V ar(c0 V1 y) = c0 V1 V V1 c = c0 V1 c.

(7)

Cov(w,
w) = c0 V1 Cov(y, w) = c0 V1 c = V ar(w)

(8)

V ar(w w) = V ar(w) V ar(w)

(9)

In the class of linear functions of y, BLP maximizes the correlation,


0
0
.5
rww
= a c/[a Va V ar(w)] .

(10)

Maximize log r.
log r = log a0 c .5 log [a0 Va] .5 log V ar(w).
Differentiating with respect to a and equating to 0.
c
V ar(w)

Va
= 0 or Va = c
.
0
a Va
ac
Cov(w,
w)
The ratio on the right does not affect r. Consequently let it be one. Then a = V1 c.
Also the constant, b, does not affect the correlation. Consequently, BLP maximizes r.
where w
is BLP of w. Now w is a vector with E(w) = and
BLP of m0 w is m0 w,
Cov(y, w0 ) = C. Substitute the scalar, m0 w for w in the statement for BLP. Then BLP
of
m0 w = m0 + m0 C0 V1 (y )
= m0 [ + CV1 (y )]

= m0 w
because
= + C0 V1 (y ).
w
3

(11)

In the multivariate normal case, BLP maximizes the probability of selecting the better
of two candidates for selection, Henderson (1963). For fixed number selected, it maximizes
the expectation of the mean of the selected ui , Bulmer (1980).
It should be noted that when the distribution of (y, w) is multivariate normal, BLP
is the mean of w given y, that is, the conditional mean, and consequently is BP with its
desirable properties as a selection criterion. Unfortunately, however, we probably never
know the mean of y, which is X in our mixed model. We may, however, know V
accurately enough to assume that our estimate is the parameter value. This leads to the
derivation of best linear unbiased prediction (BLUP).

Best Linear Unbiased Prediction

Suppose the predictand is the random variable, w, and all we know about it is that it has
mean k0 , variance = v, and its covariance with y0 is c0 . How should we predict w? One
possibility is to find some linear function of y that has expectation, k0 (is unbiased), and
in the class of such predictors has minimum variance of prediction errors. This method
is called best linear unbiased prediction (BLUP).
Let the predictor be a0 y. The expectation of a0 y = a0 X, and we want to choose a
so that the expectation of a0 y is k0 . In order for this to be true for any value of , it is
seen that a0 must be chosen so that
a0 X = k0 .

(12)

Now the variance of the prediction error is


V ar(a0 y w) = a0 Va 2a0 c + v.

(13)

Consequently, we minimize (13) subject to the condition of (12). The equations to be


solved to accomplish this are
V X
X0 0

c
k

(14)

Note the similarity to (1) in Chapter 3, the equations for finding BLUE of k0 .
Solving for a in the first equation of (14),
a = V1 X + V1 c.
Substituting this value of a in the second equation of (14)
X0 V1 X = k + X0 V1 c.
4

(15)

Then, if the equations are consistent, and this will be true if and only if k0 is estimable,
a solution to is
= (X0 V1 X) k + (X0 V1 X) X0 V1 c.
Substituting the solution to in (15) we find
a = V1 X(X0 V1 X) k V1 X(X0 V1 X) X0 V1 c + V1 c.

(16)

Then the predictor is


a0 y = k0 (X0 V1 X) X0 V1 y + c0 V1 [y X(X0 V1 X) X0 V1 y].

(17)

But because (X0 V1 X) X0 V1 y = o , a solution to GLS equations, the predictor can


be written as
k0 o + c0 V1 (y X o ).

(18)

This result was described by Henderson (1963) and a similar result by Goldberger (1962).
Note that if k0 = 0 and if is known, the predictor would be c0 V1 (y X).
This is the usual selection index method for predicting w. Thus BLUP is BLP with o
substituted for .

4
4.1

Alternative Derivations Of BLUP


Translation invariance

We want to predict m0 w in the situation with unknown . But BLP, the minimum MSE
predictor in the class of linear functions of y, involves . Is there a comparable predictor
that is invariant to ?
Let the predictor be
a0 y + b,
invariant to the value of . For translation invariance we require
a0 y + b = a0 (y + Xk) + b
for any value of k. This will be true if and only if a0 X = 0. We minimize
E(a0 y + b m0 w)2 = a0 Va 2a0 Cm + b2 + m0 Gm
when a0 X = 0 and where G = V ar(w). Clearly b must equal 0 because b2 is positive. Minimization of a0 Va 2a0 Cm subject to a0 X = 0 leads immediately to predictor
m0 C0 V1 (y X o ), the BLUP predictor. Under normality BLUP has, in the class of
invariant predictors, the same properties as those stated for BLP.
5

4.2

Selection index using functions of y with zero means

An interesting way to compute BLUP of w is the following. Compute = L0 y such that


E(X ) = X.
Then compute
y = y X
= (I XL0 )y T0 y.
Now
V ar(y ) = T0 VT V ,

(19)

Cov(y , w0 ) = T0 C C ,

(20)

and

where C = Cov(y, w0 ). Then selection index is


0

= C V y .
w
0
= Cov(w,
w0 ) = C V C .
V ar(w)
w) = V ar(w) V ar(w).

V ar(w

(21)
(22)
(23)

is invariant to choice of T and to the g-inverse of V that is computed. V has


Now w
rank = n r. One choice of is OLS = (X0 X) X0 y. In that case T = I X(X0 X) X0 .
could also be computed as OLS of an appropriate subset of y, with no fewer than r
elements of y.
Under normality,
= E(w | y ), and
w
w) = V ar(w | y ).
V ar(w

(24)
(25)

Variance Of Prediction Errors

We now state some useful variances and covariances. Let a vector of predictands be w.
Let the variance-covariance matrix of the vector be G and its covariance with y be C0 .
Then the predictor of w is
= K0 o + C0 V1 (y X o ).
w
w0 ) = K0 (X0 V1 X) X0 V1 C + C0 V1 C
Cov(w,
C0 V1 X(X0 V1 X) X0 V1 C.
6

(26)
(27)

= K0 (X0 V1 X) K + C0 V1 C
V ar(w)
C0 V1 X(X0 V1 X) X0 V1 C.

(28)

w) = V ar(w) Cov(w,
w0 ) Cov(w, w
0 ) + V ar(w)

V ar(w
0
0 1

0
0 1
0 1
= K (X V X) K K (X V X) X V C
C0 V1 X(X0 V1 X) K + G C0 V1 C
+ C0 V1 X(X0 V1 X) X0 V1 C.

(29)

Mixed Model Methods

The mixed model equations, (4) of Chapter 3, often provide an easy method to compute
BLUP. Suppose the predictand, w, can be written as
w = K0 + u,

(30)

where u are the variables of the mixed model. Then it can be proved that
,
BLUP of w = BLUP of K0 + u = K0 o + u

(31)

are solutions to the mixed model equations. From the second equation
where and u
of the mixed model equations,
= (Z0 R1 Z + G1 )1 Z0 R1 (y X o ).
u
But it can be proved that
(Z0 R1 Z + G1 )1 Z0 R1 = C0 V1 ,
where C = ZG, and V = ZGZ0 + R. Also o is a GLS solution. Consequently,
.
K0 o + C0 V1 (y X o ) = K0 o + u
From (24) it can be seen that
.
BLUP of u = u

(32)

Proof that (Z0 R1 Z + G1 )1 Z0 R1 = C0 V1 follows.


GZ0 V1
GZ0 [R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ]
G[Z0 R1 Z0 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ]
G[Z0 R1 (Z0 R1 Z + G1 )(Z0 R1 Z + G1 )1 Z0 R1
+ G1 (Z0 R1 Z + G1 )1 Z0 R1 ]
= G[Z0 R1 Z0 R1 + G1 (Z0 R1 Z + G1 )1 Z0 R1 ]
= (Z0 R1 Z + G1 )1 Z0 R1 .

C0 V1 =
=
=
=

This result was presented by Henderson (1963). The mixed model method of estimation and prediction can be formulated as Bayesian estimation, Dempfle (1977). This is
discussed in Chapter 9.
7

Variances from Mixed Model Equations

A g-inverse of the coefficient matrix of the mixed model equations can be used to find
needed variances and covariances. Let a g-inverse of the matrix of the mixed model
equations be
C11 C12
0
C12 C22

(33)

Then
V ar(K0 o )
0)
Cov(K0 o , u
Cov(K0 o , u0 )
0 u0 )
Cov(K0 o , u
V ar(
u)
Cov(
u, u0 )
V ar(
u u)
w)
V ar(w

=
=
=
=
=
=
=
=

K0 C11 K.
0.
K0 C12 .
K0 C12 .
G C22 .
G C22 .
C22 .
0
K0 C11 K + K0 C12 + C12 K + C22 .

(34)
(35)
(36)
(37)
(38)
(39)
(40)
(41)

These results were derived by Henderson (1975a).

Prediction Of Errors

The prediction of errors (estimation of the realized values) is simple. First, consider the
model y = X + and the prediction of the entire error vector, . From (18)
= C0 V1 (y X o ),

but since C0 = Cov(, y0 ) = V, the predictor is simply


= VV1 (y X o )

= y X o .

(42)

To predict n+1 , not in the model for y, we need to know its covariance with y.
Suppose this is c0 . Then
n+1 = c0 V1 (y X o )
.
= c0 V1

(43)

Next consider prediction of e from the mixed model. Now Cov(e, y0 ) = R. Then
= RV1 (y X o )
e
= R[R1 R1 Z(Z0 R1 Z + G1 )1 Z0 R1 ](y X o ),
from the result on V1 ,
= [I Z(Z0 R1 Z + G1 )1 Z0 R1 ](y X o )
= y X o Z(Z0 R1 Z + G1 )1 Z0 R1 (y X o )
= y X o Z
u.

(44)

To predict en+1 , not in the model for y, we need the covariance between it and e, say
c . Then the predictor is
0

.
en+1 = c0 R1 e
0

(45)

We now define e0 = [ep em ], where ep refers to errors attached to y and em to future


errors. Let
ep
em

Rpp Rpm
0
Rpm Rmm

(46)

Then
p = y X o Z
e
u,
and
0

p .
m = Rpm R1
e
pp e
Some prediction error variances and covariances follow.
V ar(
ep ep ) = WCW0 ,
where

C1
C2

W = [X Z], C =

where C is the inverse of mixed model coefficient matrix, and C1 , C2 have p,q rows
respectively. Additionally,
0

Cov[(
ep ep ), ( o )0 K] = WC1 K,
0

Cov[(
ep ep ), (
u u)0 ] = WC2 ,
Cov[(
ep ep ), (
em em )0 ] = WCW0 R1
pp Rpm ,
0

0 1
V ar(
em em ) = Rmm Rpm R1
pp WCW Rpp Rpm ,
0

Cov[(
em em ), ( o )0 K] = Rpm R1
pp WC1 K, and
Cov[(
em em ), (
u u)0 ] = Rpm R1
pp WC2 .
9

Prediction Of Missing u

Three simple methods exist for prediction of a u vector not in the model, say un .
n = B0 V1 (y X o )
u

(47)

where B0 is the covariance between un and y0 . Or


n = C0 G1 u
,
u

(48)

is BLUP of u. Or write expanded mixed


where C0 = Cov(un , u0 ), G = V ar(u), and u
model equations as follows:
X0 R1 X
X0 R1 Z
0
X0 R1 y
o
0 1

0 1

0 1
= Z R y ,
Z R X Z R Z + W11 W12 u
0
n
u
0
0
W12
W22

where
W11 W12
0
W12 W22

G C
C0 Gn

(49)

!1

and G = V ar(u), C = Cov(u, un ), Gn = V ar(un ). The solution to (49) gives the


same results as before when un is ignored. The proofs of these results are in Henderson
(1977a).

10

Prediction When G Is Singular

The possibility exists that G is singular. This could be true in an additive genetic model
with one or more pairs of identical twins. This poses no problem if one uses the method
= GZ0 V1 (y X 0 ), but the mixed model method previously described cannot be
u
used since G1 is required. A modification of the mixed model equations does permit a
. One possibility is to solve the following.
solution to o and u
X0 R1 X
X0 R1 Z
GZ0 R1 X GZ0 R1 Z + I

X0 R1 y
GZ0 R1 y

(50)

is BLUP
The coefficient matrix has rank, r + q. Then o is a GLS solution to , and u
of u. Note that the coefficient matrix above is not symmetric. Further, a g-inverse of
it does not yield sampling variances. For this we proceed as follows. Compute C, some
g-inverse of the matrix. Then
!
I 0
C
0 G
has the same properties as the g-inverse in (33).
10

If we want a symmetric coefficient matrix we can modify the equations of (50) as


follows.
X0 R1 X
X0 R1 ZG
0 1
GZ R X GZ0 R1 ZG + G

X0 R1 y
GZ0 R1 y

(51)

Then
This coefficient matrix has rank, r+ rank (G). Solve for o , .
= G.

u
Let C be a g-inverse of the matrix of (51). Then
I 0
0 G

I 0
0 G

has the properties of (33).


These results on singular G are due to Harville (1976). These two methods for singular G can also be used for nonsingular G if one wishes to avoid inverting G, Henderson
(1973).

11

Examples of Prediction Methods

Let us illustrate some of these prediction methods. Suppose


X0 =

1 1 1 1 1
1 2 1 3 4

1 1 0 0 0

0
, Z = 0 0 1 0 0 ,
0 0 0 1 1

3 2 1

0
4 1
G =
, R = 9I, y = (5, 3, 6, 7, 5).
5
By the basic GLS and BLUP methods

V = ZGZ0 + R =

12

3
12

2
2
13

1
1
1
5
14

1.280757
2.627792

1
1
1
14

Then the GLS equations, X0 V1 X o = X0 V1 y are


.249211 .523659
.523659 1.583100

o =
11

The inverse of the coefficient matrix is


13.1578 4.3522
4.3522
2.0712

and the solution to o is [5.4153 .1314]0 . To predict u,

y X o =

.2839
2.1525
.7161
1.9788
.1102

GZ0 V1

.1838 .1838 .0929 .0284 .0284

= .0929 .0929 .2747 .0284 .0284 ,


.0284 .0284 .0284 .2587 .2587

.3220

o
0 1
= GZ V (y X ) = .0297
u
,
.4915
0 u0 ) = (X0 V1 X) X0 V1 ZG
Cov( o , u
!
3.1377 3.5333
.4470
=
, and
.5053
.6936 1.3633
V ar(
u u) = G GZ0 V1 ZG + GZ0 V1 X(X0 V1 X) X0 V1 ZG

1.3456 1.1638 .7445


3 2 1

1.5274 .7445
4 1
=


2.6719
5

1.1973 1.2432
1.3063
+

.9182
.7943

2.3541

2.8517 2.0794 1.1737

3.7789 1.0498
=
.
4.6822
The mixed model method is considerably easier.
0

.5556 1.2222
1.2222 3.4444

XR X =
XR Z =

Z0 R1 Z =

.2222 .1111 .2222


.3333 .1111 .7778

.2222

0
.1111

12

0
0
,
.2222

2.8889
6.4444

X0 R1 y =

G1

.8889

0 1
, Z R y = .6667 ,
1.3333

.5135 .2432 .0541


.3784 .0270
=

.
.2162

Then the mixed model equations are

.5556 1.2222 .2222


.1111
.2222

3.4444
.3333
.1111
.7778

.7357 .2432 .0541

.4895 .0270

.4384

2.8889
6.4444
.8889
.6667
1.3333

A g-inverse (regular inverse) is

13.1578 4.3522 3.1377 3.5333


.4470

2.0712
.5053
.6936 1.3633

2.8517
2.0794
1.1737

3.7789
1.0498

4.6822
0 u0 ),
The upper 2 x 2 represents (X0 V1 X) , the upper 2 x 3 represents Cov( o , u
and the lower 3 x 3 V ar(
u u). These are the same results as before. The solution is
(5.4153, .1314, .3220, .0297, .4915) as before.
Now let us illustrate with singular G. Let the data be the same as before except

        [2 1 3]
   G =  [  3 4] .
        [    7]

Note that the 3rd row of G is the sum of the first 2 rows. Now

                   [11  2  1  3  3]
                   [   11  1  3  3]
   V = ZGZ' + R =  [      12  4  4]
                   [         16  7]
                   [            16]

and

              [.0993 .0118 .0004 .0115 .0115]
              [      .0993 .0004 .0115 .0115]
   V^{-1}  =  [            .0943 .0165 .0165] .
              [                  .0832 .0280]
              [                        .0832]

The GLS equations are

   [.2233  .3670]        [1.0803]
   [.3670 1.2409] β°  =  [1.7749] ,

                       [ 8.7155 -2.5779]
   (X'V^{-1}X)^{-1} =  [-2.5779  1.5684] ,   β° = [4.8397  -.0011]' .

                [.1065 .1065 .0032 .1032 .1032]
   GZ'V^{-1} =  [.0032 .0032 .1516 .1484 .1484] ,
                [.1033 .1033 .1484 .2516 .2516]

              [  .1614]
              [-1.8375]                                 [.0582]
   y - Xβ° =  [ 1.1614] ,   û = GZ'V^{-1}(y - Xβ°)  =   [.5270] .
              [ 2.1636]                                 [.5852]
              [  .1648]

Note that û3 = û1 + û2 as a consequence of the linear dependencies in G.

                        [.9491 .8081 1.7572]
   Cov(β°, û' - u')  =  [.5564 .7124 1.2688] ,

                  [1.9309 1.0473 2.9782]
   Var(û - u)  =  [       2.5628 3.6100] .
                  [              6.5883]

By the modified mixed model methods

                  [1.2222 3.1111]                  [.4444 .1111  .6667]
   GZ'R^{-1}X  =  [1.4444 3.7778] ,  GZ'R^{-1}Z =  [.2222 .3333  .8889] ,
                  [2.6667 6.8889]                  [.6667 .4444 1.5556]

                  [ 6.4444]                [2.8889]
   GZ'R^{-1}y  =  [ 8.2222] ,  X'R^{-1}y = [6.4444] .
                  [14.6667]

Then the non-symmetric mixed model equations (50) are

   [ .5556 1.2222  .2222  .1111  .2222]          [ 2.8889]
   [1.2222 3.4444  .3333  .1111  .7778]  (β°)    [ 6.4444]
   [1.2222 3.1111 1.4444  .1111  .6667]  (û ) =  [ 6.4444] .
   [1.4444 3.7778  .2222 1.3333  .8889]          [ 8.2222]
   [2.6667 6.8889  .6667  .4444 2.5556]          [14.6667]

The solution is (4.8397, -.0011, .0582, .5270, .5852) as before. The inverse of the
coefficient matrix is

   [ 8.7155 -2.5779  .8666  .5922  .4587]
   [-2.5779  1.5684  .1737  .1913  .3650]
   [  .9491   .5563  .9673  .0509  .0182] .
   [  .8081   .7124  .1843  .8842  .0685]
   [ 1.7572  1.2688  .1516  .0649  .9133]

Post-multiplying this matrix by

   [I 0]
   [0 G]

gives

   [8.7155 -2.5779  .9491  .8081 1.7572]
   [        1.5684  .5563  .7124 1.2688]
   [               1.9309 1.0473 2.9782] .
   [                      2.5628 3.6101]
   [                             6.5883]

These yield the same variances and covariances as before. The analogous symmetric
equations (51) are

   [.5556 1.2222 1.2222 1.4444  2.6667]          [ 2.8889]
   [      3.4444 3.1111 3.7778  6.8889]  (β°)    [ 6.4444]
   [             5.0    4.4444  9.4444]  (α°) =  [ 6.4444] .
   [                    7.7778 12.2222]          [ 8.2222]
   [                           21.6667]          [14.6667]

A solution is [4.8397, -.0011, -.2697, 0, .1992]. Premultiplying α° = (-.2697, 0, .1992)'
by G we obtain û' = (.0582, .5270, .5852) as before.

A g-inverse of the matrix is

   [8.7155 -2.5779 .2744 0 .1334]
   [        1.5684 .0176 0 .1737]
   [              1.1530 0 .4632] .
   [                      0 0   ]
   [                        .3197]

Pre- and post-multiplying this matrix by

   [I 0]
   [0 G]

yields the same matrix as post-multiplying the non-symmetric inverse by

   [I 0]
   [0 G]

and consequently we have the required matrix for variances and covariances.
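The non-symmetric equations (50) are convenient to program because they never require G^{-1}, so they work unchanged for a singular G. The following is a minimal numpy sketch of that computation; it is written as a generic function under the assumption that a least-squares (g-inverse) solve is acceptable when the system is not of full rank.

    import numpy as np

    def mme_singular_G(X, Z, G, R, y):
        """Non-symmetric equations (50); usable even when G has no inverse."""
        Ri = np.linalg.inv(R)
        top = np.hstack([X.T @ Ri @ X, X.T @ Ri @ Z])
        bot = np.hstack([G @ Z.T @ Ri @ X, G @ Z.T @ Ri @ Z + np.eye(Z.shape[1])])
        C = np.vstack([top, bot])
        rhs = np.concatenate([X.T @ Ri @ y, G @ Z.T @ Ri @ y])
        sol = np.linalg.lstsq(C, rhs, rcond=None)[0]   # tolerant of a singular system
        p = X.shape[1]
        return sol[:p], sol[p:]

For the singular G of this example, the returned û satisfies the same linear dependency as G itself (here û3 = û1 + û2), which is a useful check on any implementation.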

12 Illustration Of Prediction Of Missing u

We illustrate prediction of random variables not in the model for y by a multiple trait
example. Suppose we have 2 traits and 3 animals, the first 2 with measurements on traits
1 and 2, but the third with a record only on trait 1. We assume an additive genetic model
and wish to predict breeding values of both traits on all 3 animals and also to predict the
second-trait record of animal 3. The numerator relationship matrix for the 3 animals is

   [ 1  1/2 1/2]
   [1/2  1  1/4] .
   [1/2 1/4  1 ]

The additive genetic and error variance-covariance matrices are assumed to be

         [2 2]             [4 1]
   G0 =  [2 3]   and  R0 = [1 5] ,

respectively. The records are ordered animals within traits and are [6, 8, 7, 9, 5]. Assume

         [1 1 1 0 0]
   X' =  [0 0 0 1 1] .

If all 6 elements of u are included,

   Z = [I5  0] ,

that is, the first 5 columns of Z form an identity matrix and the 6th column is null. If
the last element (the missing u6) is not included, delete the last column from Z. When
all u are included,

        [A g11  A g12]
   G =  [A g12  A g22] ,

where gij is the ij-th element of G0, the genetic variance-covariance matrix. Numerically
this is

   [2 1  1  2 1    1  ]
   [  2  .5 1 2    .5 ]
   [     2  1 .5   2  ]
   [        3 1.5  1.5] .
   [           3   .75]
   [                3  ]

If u6 is not included, delete the 6th row and column from G.

        [4 0 0 1 0]
        [  4 0 0 1]
   R =  [    4 0 0] .
        [      5 0]
        [        5]

             [ .2632   0     0   -.0526   0   ]
             [   0    .2632  0     0    -.0526]
   R^{-1} =  [   0     0    .25    0      0   ] .
             [-.0526   0     0    .2105   0   ]
             [   0   -.0526  0     0     .2105]

G^{-1} for the first 5 elements of u is

   [ 2.1667 -1.     -.3333 -1.3333   .6667]
   [         2.      0       .6667 -1.3333]
   [                 .6667   0       0    ] .
   [                        1.3333  -.6667]
   [                                1.3333]

Then the mixed model equations for β° and û1, ..., û5 are

   [.7763 -.1053  .2632  .2632  .25   -.0526  -.0526]
   [       .4211 -.0526 -.0526  0      .2105   .2105]
   [              2.4298 -1.    -.3333 -1.3860  .6667]  (β° )
   [                     2.2632  0      .6667 -1.3860]  (û  )  =  (4.70, 2.21, 1.11, 1.84, 1.75, 1.58, .63)' .
   [                             .9167  0      0     ]
   [                                   1.5439  -.6667]
   [                                           1.5439]

The solution is (6.9909, 6.9959, .0545, -.0495, .0223, .2651, -.2601).
To predict u6 we can use û1, ..., û5. The solution is

                            [2 1  1  2  1  ]^{-1} [û1]
                            [  2  .5 1  2  ]      [û2]
   û6 = [1 .5 2 1.5 .75]    [     2  1  .5 ]      [û3]  = .1276 .
                            [        3  1.5]      [û4]
                            [           3  ]      [û5]

We could have solved directly for û6 in the mixed model equations as follows.

   [.7763 -.1053  .2632  .2632  .25   -.0526  -.0526   0     ]
   [       .4211 -.0526 -.0526  0      .2105   .2105   0     ]
   [              2.7632 -1.    -1.   -1.7193   .6667   .6667]
   [                     2.2632  0      .6667 -1.3860   0    ]  (β°, û1, ..., û6)
   [                             2.25   .6667  0      -1.3333]
   [                                   1.8772  -.6667  -.6667]
   [                                           1.5439   0    ]
   [                                                   1.3333]

     = [4.70, 2.21, 1.11, 1.84, 1.75, 1.58, .63, 0]' .

The solution is (6.9909, 6.9959, .0545, -.0495, .0223, .2651, -.2601, .1276), and equals the
previous solution.
The predictor of the record on the second trait of animal 3 is some new β2 + u6 + e6.
We already have û6. We can predict e6 from ê1, ..., ê5:

   [ê1]   [y1]   [1 0  1 0 0 0 0]  (β1°)    [-1.0454]
   [ê2]   [y2]   [1 0  0 1 0 0 0]  (β2°)    [ 1.0586]
   [ê3] = [y3] - [1 0  0 0 1 0 0]  (û1 )  = [ -.0132] .
   [ê4]   [y4]   [0 1  0 0 0 1 0]  ( .. )   [ 1.7391]
   [ê5]   [y5]   [0 1  0 0 0 0 1]  (û5 )    [-1.7358]

Then ê6 = (0 0 1 0 0) R^{-1} (ê1 ... ê5)' = -.0033. The column vector (0 0 1 0 0)' is
Cov[e6, (e1 e2 e3 e4 e5)'], and R above is Var[(e1 ... e5)'].
Suppose we had the same model as before but we have no data on the second trait.
We want to predict breeding values for both traits in the 3 animals, that is, u1, ..., u6.
We also want to predict records on the second trait, that is, u4 + e4, u5 + e5, u6 + e6.
The mixed model equations are

   [.75  .25   .25   .25    0       0       0     ]          [5.25]
   [    2.75  -1.   -1.   -1.6667   .6667   .6667 ]          [1.50]
   [          2.25   0      .6667 -1.3333   0     ]  (β1°)   [2.00]
   [                2.25    .6667   0     -1.3333 ]  (û  ) = [1.75] .
   [                       1.6667  -.6667  -.6667 ]          [0   ]
   [                               1.3333   0     ]          [0   ]
   [                                       1.3333 ]          [0   ]

The solution is

   [7.0345, -.2069, .1881, -.0846, -.2069, .1881, -.0846].

The last 6 values represent predictions of breeding values. Then

   [ê1]   [y1]          (β1°)    [-.8276]
   [ê2] = [y2] - (X Z)  (û1 ) =  [ .7774] ,
   [ê3]   [y3]          (û2 )    [ .0502]
                        (û3 )

   [ê4]   [1 0 0] [4 0 0]^{-1} [ê1]   [-.2069]
   [ê5] = [0 1 0] [0 4 0]      [ê2] = [ .1944] .
   [ê6]   [0 0 1] [0 0 4]      [ê3]   [ .0125]

Then predictions of second-trait records are

        [-.2069]   [-.2069]
   β2 + [ .1881] + [ .1944] ,
        [-.0846]   [ .0125]

but β2 is unknown.
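The multiple-trait G above is simply a Kronecker product of the genetic (co)variance matrix G0 with the relationship matrix A, and the prediction of the breeding value left out of the model for y (u6) is a regression on the predicted breeding values that are in the model. A minimal numpy sketch of that last step follows; the five û values are copied from the mixed model solution given above.

    import numpy as np

    A  = np.array([[1, .5, .5], [.5, 1, .25], [.5, .25, 1]])   # relationship matrix
    G0 = np.array([[2., 2.], [2., 3.]])                        # genetic (co)variances
    G  = np.kron(G0, A)        # u ordered animals within traits, as in the text

    u_hat5 = np.array([.0545, -.0495, .0223, .2651, -.2601])   # from the MME above
    g65, G55 = G[5, :5], G[:5, :5]
    u6 = g65 @ np.linalg.solve(G55, u_hat5)
    print(round(u6, 4))        # approximately .1276, as reported in the text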

13 A Singular Submatrix In G

Suppose that G can be partitioned as

        [G11  0 ]
   G =  [ 0  G22]

such that G11 is non-singular and G22 is singular. A corresponding partition of u' is
(u1' u2'). Then two additional methods can be used. First, solve (52).

   [    X'R^{-1}X        X'R^{-1}Z1              X'R^{-1}Z2       ] (β°)    [    X'R^{-1}y  ]
   [   Z1'R^{-1}X    Z1'R^{-1}Z1 + G11^{-1}      Z1'R^{-1}Z2      ] (û1) =  [   Z1'R^{-1}y  ]      (52)
   [G22 Z2'R^{-1}X   G22 Z2'R^{-1}Z1         G22 Z2'R^{-1}Z2 + I  ] (û2)    [G22 Z2'R^{-1}y ]

Let a g-inverse of this matrix be C. Then the prediction error variances and covariances
come from

      [I 0  0 ]
   C  [0 I  0 ] .                                                                            (53)
      [0 0 G22]

The symmetric counterpart of these equations is

   [    X'R^{-1}X        X'R^{-1}Z1              X'R^{-1}Z2 G22         ] (β°)    [    X'R^{-1}y  ]
   [   Z1'R^{-1}X    Z1'R^{-1}Z1 + G11^{-1}      Z1'R^{-1}Z2 G22        ] (û1) =  [   Z1'R^{-1}y  ]      (54)
   [G22 Z2'R^{-1}X   G22 Z2'R^{-1}Z1         G22 Z2'R^{-1}Z2 G22 + G22  ] (α2)    [G22 Z2'R^{-1}y ]

and û2 = G22 α2. Let C be a g-inverse of the coefficient matrix of (54). Then the
variances and covariances come from

   [I 0  0 ]     [I 0  0 ]
   [0 I  0 ]  C  [0 I  0 ] .                                                                 (55)
   [0 0 G22]     [0 0 G22]

14 Prediction Of Future Records

Most applications of genetic evaluation are essentially problems in prediction of future
records, or more precisely, prediction of the relative values of future records, the relativity
arising from the fact that we may have no data available for estimation of future X, for
example, a year effect for some record in a future year. Let the model for a future record
be

   yi = xi'β + zi'u + ei .                                                                   (56)

Then if we have available BLUE of xi'β = xi'β° and BLUP of u and ei, namely û and êi,
BLUP of this future record is

   xi'β° + zi'û + êi .

Suppose however that we have information on only a subvector of β, say β2. Write
the model for a future record as

   yi = x1i'β1 + x2i'β2 + zi'u + ei .

Then we can assert BLUP for only

   x2i'β2 + zi'u + ei .

But if we have some other record we wish to compare with this one, say yj, with model

   yj = x1j'β1 + x2j'β2 + zj'u + ej ,

we can compute BLUP of yi - yj provided that x1i = x1j.
It should be remembered that the variance of the error of prediction of a future record
(or linear function of a set of records) should take into account the variance of the error
of prediction of the error (or linear combination of errors) and also its covariance with
β° and û. See Section 8 for these variances and covariances. An extensive discussion of
prediction of future records is in Henderson (1977b).

15 When Rank of MME Is Greater Than n

In some genetic problems, and in particular individual animal multiple trait models, the
order of the mixed model coefficient matrix can be much greater than n, the number
of observations. In these cases one might wish to consider a method described in this
section, especially if one can thereby store and invert the coefficient matrix in cases when
the mixed model equations are too large for this to be done. Solve equations (57) for β°
and s.

   [V  X] (s )   [y]
   [X' 0] (β°) = [0]                                                                         (57)

Then β° is a GLS solution and

   û = GZ's                                                                                  (58)

is BLUP of u. It is easy to see why these are true. Eliminate s from equations (57). This
gives

   (X'V^{-1}X) β° = X'V^{-1}y,

which are the GLS equations. Solving for s in (57) we obtain

   s = V^{-1}(y - Xβ°).

Then GZ's = GZ'V^{-1}(y - Xβ°), which we know to be BLUP of u.
Some variances and covariances from a g-inverse of the matrix of (57) are shown
below. Let a g-inverse be

   [C11  C12]
   [C12' C22] .

Then

   Var(K'β°)          = K'C22 K.                                                             (59)
   Var(û)             = GZ'C11 V C11 ZG.                                                     (60)
   Cov(K'β°, û')      = K'C12' V C11 ZG = 0.                                                 (61)
   Cov(K'β°, u')      = K'C12' ZG.                                                           (62)
   Cov(K'β°, û' - u') = -K'C12' ZG.                                                          (63)
   Var(û - u)         = G - Var(û).                                                          (64)

The matrix of (57) will often be too large to invert for purposes of solving s and β°.
With mixed model equations that are too large we can solve by Gauss-Seidel iteration.
Because this method requires diagonals that are non-zero, we cannot solve (57) by this
method. But if we are interested in û, but not in β°, an iterative method can be used.
Subsection 4.2 presented a method for BLUP that is

   û = C'V*^{-} y* .

Now solve iteratively

   V* s = y* ,                                                                               (65)

then

   û = C's .                                                                                 (66)

Remember that V* has rank = n - r. Nevertheless convergence will occur, but not to a
unique solution. V* (and y*) could be reduced to dimension n - r, so that the reduced
V* would be non-singular.
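The bordered system (57) is of order n + p rather than p + q, which is why it can be attractive when q, the order of u, greatly exceeds n. A minimal numpy sketch of (57)-(58) follows; a least-squares solve is used so that the same code works whether or not X has full column rank.

    import numpy as np

    def blup_via_57(X, Z, G, R, y):
        """Solve [V X; X' 0](s, b) = (y, 0); b is a GLS solution and u_hat = G Z' s."""
        V = Z @ G @ Z.T + R
        n, p = X.shape
        M = np.block([[V, X], [X.T, np.zeros((p, p))]])
        rhs = np.concatenate([y, np.zeros(p)])
        sol = np.linalg.lstsq(M, rhs, rcond=None)[0]   # a g-inverse solution suffices
        s, b = sol[:n], sol[n:]
        return b, G @ Z.T @ s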
Suppose that

         [1 1 1 1 1]          [1 1 2 0 3]
   X' =  [1 2 3 2 4] ,   C' = [2 0 1 1 2] ,

        [9 3 2 1 1]
        [  8 1 2 2]
   V =  [    9 2 1] ,   y' = [6 3 5 2 8].
        [      7 2]
        [        8]

First let us compute β° by GLS and û by GZ'V^{-1}(y - Xβ°) = C'V^{-1}(y - Xβ°).
The GLS equations are

   [.335816  .828030]        [1.622884]
   [.828030 2.821936] β°  =  [4.987475] ,

   (β°)' = [1.717054 1.263566].  From this  û' = [.817829 1.027132].

By the method of (57) we have equations

   [9 3 2 1 1 1 1]        [6]
   [3 8 1 2 2 1 2]        [3]
   [2 1 9 2 1 1 3] (s )   [5]
   [1 2 2 7 2 1 2] (β°) = [2] .
   [1 2 1 2 8 1 4]        [8]
   [1 1 1 1 1 0 0]        [0]
   [1 2 3 2 4 0 0]        [0]

The solution is (β°)' = same as for GLS, and

   s' = (.461240, .296996, .076550, .356589, .268895).

Then û = C's = same as before. Next let us compute û from different y*. First let β̃ be
the solution to OLS using the first two elements of y. This gives

        [ 2 -1 0 0 0]
   β̃ =  [-1  1 0 0 0] y   and

                  [ 0  0 0 0 0]
                  [ 0  0 0 0 0]
   y* = y - Xβ̃ =  [ 1 -2 1 0 0] y ,   or   y* = T'y,
                  [ 0 -1 0 1 0]
                  [ 2 -3 0 0 1]

   y*' = [0 0 5 -1 11].

Using the last 3 elements of y* gives

         [38 11 44]           [1 -1 2]
   V* =  [   11 14] ,   C*' = [3  1 6] .
         [      72]

Then û = C*'V*^{-1}y* = same as before.
Another possibility is to compute β̃ by OLS using elements 1 and 3 of y. This gives

        [1.5 0 -.5 0 0]
   β̃ =  [-.5 0  .5 0 0] y   and   y*' = [0 -2.5 0 -3.5 3.5].

Dropping the first and third elements of y*,

         [9.5 4.0 6.5 ]           [ -.5 -1.5  .5]
   V* =  [    9.5 4.0 ] ,   C*' = [-1.5  -.5 1.5] .
         [       25.5 ]

This gives the same value for û.
Finally we illustrate β̃ by GLS.

        [.780362 .254522 .142119 .645995 .538760]
   β̃ =  [.242894 .036176 .136951 .167959 .310078] y ,

   y*' = (3.019380, 1.244186, .507752, 2.244186, 1.228682).

         [3.268734  .852713  .025840 2.85713   .904393]
         [         4.744186 1.658915 1.255814  .062016]
   V* =  [                  5.656331  .658915 3.028424] ,
         [                           3.744186  .062016]
         [                                    2.005168]

          [.940568  .015504  .090439 .984496 .165375]
   C*' =  [.909561 1.193798  .297158 .193798 .599483] .

Then û = C*'V*^{-} y*. V* has rank = 3, and one g-inverse is

   [0      0        0        0       0]
   [       .271363  .092077  .107220 0]
   [                .211736  .068145 0] .
   [                         .315035 0]
   [                                 0]

This gives û the same as before. Another g-inverse is

   [1.372401 0 0 1.035917 .586957]
   [         0 0        0       0]
   [           0        0       0] .
   [             1.049149 .434783]
   [                       .75000]

This gives the same û as before.
It can be seen that when β̃ = β°, a GLS solution, C'V^{-1}y* = C*'V*^{-} y*. Thus if V
can be inverted to obtain β°, this is the easier method. Of course this section is really
concerned with the situation in which V^{-1} is too difficult to compute, and the mixed
model equations are also intractable.

16 Prediction When R Is Singular

If R is singular, the usual mixed model equations, which require R^{-1}, cannot be used.
Harville (1976) does describe a method using a particular g-inverse of R that can be used.
Finding this g-inverse is not trivial. Consequently, we shall describe methods different
from his that lead to the same results. Different situations exist depending upon whether
X and/or Z are linearly independent of R.

16.1 X and Z linearly dependent on R

If R has rank t < n, we can write R, with possible re-ordering of rows and columns, as

        [ R1    R1 L  ]
   R =  [L'R1  L'R1 L ] ,

where R1 is t x t, and L is t x (n - t) with rank (n - t). Then if X, Z are linearly
dependent upon R,

        [ X1  ]          [ Z1  ]
   X =  [L'X1 ] ,   Z =  [L'Z1 ] .

Then it can be seen that V is singular, and X is linearly dependent upon V. One could
find β° and û by solving these equations

   [V  X] (s )   [y]
   [X' 0] (β°) = [0]                                                                         (67)

and û = GZ's. See section 14. It should be noted that (67) is not a consistent set of
equations unless

        [ y1  ]
   y =  [L'y1 ] .

If X has full column rank, the solution to β° is unique. If X is not full rank, K'β° is
unique, given K'β is estimable. There is not a unique solution to s, but û = GZ's is
unique.
Let us illustrate with

                       [1 2 3]
   X' = (1 2 3),  Z' = [2 1 3] ,   y' = (5 3 8),

        [3 -1 2]
   R =  [    4 3] ,   G = I.
        [       5]

Then

                   [ 8  3 11]
   V = R + ZGZ' =  [    9 12] ,
                   [      23]

which is singular. Then we find some solution to

   [ 8  3 11 1] (s1)   [5]
   [ 3  9 12 2] (s2)   [3]
   [11 12 23 3] (s3) = [8] .
   [ 1  2  3 0] (β°)   [0]

Three different solution vectors are

   (14  -7   0  54)/29,
   (21   0  -7  54)/29,
   ( 0 -21  14  54)/29.

Each of these gives û' = (0 21)/29 and β° = 54/29.
We can also obtain a unique solution to K'β° and û by setting up mixed model
equations using y1 only, or any other linearly independent subset of y. In our example
let us use the first 2 elements of y. The mixed model equations are

   [1 2]' [3 -1]^{-1} [1 1 2]   [0 0 0]  (β°)    [1 2]' [3 -1]^{-1} [5]
   [2 1]  [-1 4]      [2 2 1] + [0 1 0]  (û1) =  [2 1]  [-1 4]      [3] .
                                [0 0 1]  (û2)

These are

   [20 20 19]       (β°)         [51]
   [20 31 19] /11   (û1)    =    [51] /11.
   [19 19 34]       (û2)         [60]

The solution is (54, 0, 21)/29 as before.
If we use y1, y3 we get the same equations as above, and also the same if we use y2, y3.

16.2 X linearly independent of V, and Z linearly dependent on R

In this case V is singular but, with X independent of V, equations (67) have a unique
solution if X has full column rank. Otherwise K'β° is unique provided K'β is estimable.
In contrast to section 15.1, y need not be linearly dependent upon V and R. Let us use
the example of section 14.1 except now X' = (1 2 3), and y' = (5 3 4). Then the unique
solution is (s' β°) = (1104, 588, 24, 4536)/2268.

16.3 Z linearly independent of R

In this case V is non-singular, and X is usually linearly independent of V even though it
may be linearly dependent on R. Consequently s and K'β° are unique as in section 15.2.
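Because (67) may be inconsistent when V is singular, an implementation should test consistency before solving. The sketch below is one way to do that in numpy; it is a generic routine, not a transcription of any particular example in the text.

    import numpy as np

    def solve_67(V, X, y):
        """Equations (67) with possibly singular V: check consistency, then solve
        with a generalized inverse; s is not unique but G Z' s is."""
        n, p = X.shape
        M = np.block([[V, X], [X.T, np.zeros((p, p))]])
        rhs = np.concatenate([y, np.zeros(p)])
        aug = np.column_stack([M, rhs])
        if np.linalg.matrix_rank(aug) > np.linalg.matrix_rank(M):
            raise ValueError("equations (67) are not consistent for this y")
        sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
        return sol[:n], sol[n:]          # s and beta-solution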

17 Another Example of Prediction Error Variances

We demonstrate variances of prediction errors and predictors by the following example.


nij
Treatment Animals
1
2
1
2
1
2
1
3
Let
R = 5I, G =

2 1
1 3

The mixed model coefficient matrix is

1.4 .6 .8

.6 0

.6
.8
.4
.2
.2
.6
1.2 .2
1.2

(68)

and a g-inverse of this matrix is

0
0
0
0

3.33333 1.66667 1.66667 1.66667

3.19820 1.44144 2.1172


.
1.84685
1.30631

2.38739

Let K =

1 1 0
1 0 1

(69)

. Then

V ar

K0 o
u
u

K0
I2

[Matrix (69)] (K I2 )

3.33333 1.66667 1.66667 1.66667

3.19820 1.44144 2.1172

=
.

1.84685
1.30631
2.38739

3.33333 1.66667

3.198198

V ar

K0 o

27

0
0
0
0
.15315 .30631
.61261

(70)

(71)

The upper 2 x 2 is the same as in (70).


0 ) = 0.
Cov(K0 o , u
V ar(
u) = G V ar(
u u).
Let us derive these results from first principles.

.33333
.33333
.33333
0
.04504
.04504 .09009
.35135
.03604
.03604 .07207
.08108
.07207 .07207
.14414 .16216

0
0
0
.21622
.21622
.21622
.02703 .02703 .02703
.05405
.05405
.05405

(72)

computed by
K0
I2
Contribution of R to V ar

[matrix (71)]

K0 o

X0 R1
Z0 R1

= [matrix (72)] R [matrix (72)]0

1.6667
0
0
0

1.37935
.10348
.20696
=

.08278 .16557
.33114

(73)

For u in
K0 o

K0
I2

Contribution of G to V ar

K0 o

[matrix (72)] Z

.66667
.33333
.44144
.55856
.15315 .15315
.30631
.30631

(74)

= [matrix (74)] G [matrix (74)]0

1.6667 1.66662
0
0

1.81885 .10348
.20696
=

.07037 .14074
.28143
28

(75)

Then the sum of matrix (73) and matrix (75) = matrix (71). For variance of prediction
errors we need

Matrix (74)

0
0
1
0

0
0
0
1

.66667
.33333
.44144
.55856
.84685 .15315
.30631 .69369

(76)

Then contribution of G to prediction error variance is


[matrix (76)] G [Matrix (76)]0 ,

1.66667 1.66667 1.66667 1.66667

1.81885 1.54492 1.91015

.
=

1.76406
1.47188
2.05624

(77)

Then prediction error variance is matrix (73) + matrix (77) = matrix (70).

18 Prediction When u And e Are Correlated

In most applications of BLUE and BLUP it is assumed that Cov(u, e') = 0. If this is not
the case, the mixed model equations can be modified to account for such covariances. See
Schaeffer and Henderson (1983).
Let

        (e)   [R  S]
   Var  (u) = [S' G] .                                                                       (78)

Then

   Var(y) = ZGZ' + R + ZS' + SZ'.                                                            (79)

Let an equivalent model be

   y = Xβ + Tu + ε,                                                                          (80)

where T = Z + SG^{-1},

        (u)   [G 0]
   Var  (ε) = [0 B] ,

and B = R - SG^{-1}S'. Then

   Var(y) = Var(Tu + ε)
          = ZGZ' + ZS' + SZ' + SG^{-1}S' + R - SG^{-1}S'
          = ZGZ' + R + ZS' + SZ'                                                             (81)

as in the original model, thus proving equivalence. Now the mixed model equations are

   [X'B^{-1}X  X'B^{-1}T          ] (β°)   [X'B^{-1}y]
   [T'B^{-1}X  T'B^{-1}T + G^{-1} ] (û ) = [T'B^{-1}y] .                                     (82)

A g-inverse of this matrix yields the required variances and covariances for estimable
functions of β°, û - u, and û.
B can be inverted by a method analogous to

   V^{-1} = R^{-1} - R^{-1}Z(Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1},  where V = ZGZ' + R:

   B^{-1} = R^{-1} + R^{-1}S(G - S'R^{-1}S)^{-1}S'R^{-1}.                                    (83)

In fact, it is unnecessary to compute B^{-1} if we instead solve (84).

   [X'R^{-1}X  X'R^{-1}T           X'R^{-1}S    ] (β°)   [X'R^{-1}y]
   [T'R^{-1}X  T'R^{-1}T + G^{-1}  T'R^{-1}S    ] (û ) = [T'R^{-1}y] .                       (84)
   [S'R^{-1}X  S'R^{-1}T           S'R^{-1}S - G] (  )   [S'R^{-1}y]

This may not be a good set of equations to solve iteratively since (S'R^{-1}S - G) is
negative definite. Consequently Gauss-Seidel iteration is not guaranteed to converge,
Van Norton (1959).
We illustrate the method of this section by an additive genetic model.

        [1 1]
        [1 2]
   X =  [1 1] ,   Z = I4,
        [1 4]

        [1.  .5 .25 .25]
   G =  [    1. .25 .25] ,   R = 4 I4,   S = S' = .9 I4,   y' = (5, 6, 7, 9).
        [        1.  .5]
        [            1.]

From these parameters

        [2.88625 .50625 .10125 .10125]
   B =  [        2.88625 .10125 .10125]
        [                2.88625 .50625]
        [                        2.88625]

and

             [2.2375 -.5625 -.1125 -.1125]
   T = T' =  [        2.2375 -.1125 -.1125] .
             [                2.2375 -.5625]
             [                        2.2375]

Then the mixed model equations are

   [1.112656 2.225313  .403338  .403338  .403338  .403338]          [ 7.510431]
   [         6.864946  .079365 1.097106  .660224 2.869187]  (β° )   [17.275150]
   [                   3.451184 1.842933  .261705  .261705]  (û  ) = [ 1.389782]
   [                            3.451184  .261705  .261705]         [ 2.566252]
   [                                     3.451184 1.842933]         [ 2.290575]
   [                                              3.451184]         [ 4.643516]

The solution is (4.78722, .98139, -.21423, -.21009, .31707, .10725).
We could solve this problem by the basic method

   β° = (X'V^{-1}X)^- X'V^{-1}y,   and   û = Cov(u, y')V^{-1}(y - Xβ°).

We illustrate that these give the same answers as the mixed model method.

                  [6.8  .5 .25 .25]
   Var(y) = V =   [    6.8 .25 .25] .
                  [        6.8  .5]
                  [            6.8]

Then the GLS equations are

   [ .512821 1.025641]        [3.461538]
   [1.025641 2.991992] β°  =  [7.846280]

and β° = (4.78722, .98139)' as before.

                  [1.90 .50 .25 .25]
   Cov(u, y')  =  [     1.90 .25 .25]  =  GZ' + S' ,
                  [          1.90 .50]
                  [               1.90]

   (y - Xβ°) = (-.768610, -.750000, 1.231390, .287221)' ,

   û = (GZ' + S')V^{-1}(y - Xβ°) = (-.21423, -.21009, .31707, .10725)'

as before.
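The equivalence of the transformed mixed model equations (82) and the basic V-based computation (79) is easy to confirm numerically. The sketch below uses the X, Z, G, R, S and y of this example and checks that the two routes agree; it is a minimal numpy illustration rather than a general-purpose program.

    import numpy as np

    X = np.array([[1,1],[1,2],[1,1],[1,4]], dtype=float)
    Z = np.eye(4)
    G = np.array([[1,.5,.25,.25],[.5,1,.25,.25],[.25,.25,1,.5],[.25,.25,.5,1]])
    R, S = 4*np.eye(4), .9*np.eye(4)
    y = np.array([5., 6., 7., 9.])

    # Equivalent model (80): y = X b + T u + eps, with T = Z + S G^{-1}, Var(eps) = B
    Gi = np.linalg.inv(G)
    T = Z + S @ Gi
    B = R - S @ Gi @ S.T
    Bi = np.linalg.inv(B)
    C = np.block([[X.T @ Bi @ X, X.T @ Bi @ T],
                  [T.T @ Bi @ X, T.T @ Bi @ T + Gi]])
    rhs = np.concatenate([X.T @ Bi @ y, T.T @ Bi @ y])
    sol = np.linalg.solve(C, rhs)

    # Basic method: GLS for beta, then u_hat = (G Z' + S') V^{-1} (y - X beta)
    V = Z @ G @ Z.T + R + Z @ S.T + S @ Z.T
    Vi = np.linalg.inv(V)
    b = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    u = (G @ Z.T + S.T) @ Vi @ (y - X @ b)
    assert np.allclose(sol, np.concatenate([b, u]))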
19 Direct Solution To β And u + Tβ

In some problems we wish to predict w = u + Tβ. The mixed model equations can be
modified to do this. Write the mixed model equations as (85). This can be done since
E(w - Tβ) = 0.

   [X'R^{-1}X  X'R^{-1}Z          ] (    β°   )   [X'R^{-1}y]
   [Z'R^{-1}X  Z'R^{-1}Z + G^{-1} ] ( ŵ - Tβ° ) = [Z'R^{-1}y]                                (85)

Re-write (85) as

   [X'R^{-1}X - X'R^{-1}ZT   X'R^{-1}Z          ] (β°)   [X'R^{-1}y]
   [Z'R^{-1}X - M            Z'R^{-1}Z + G^{-1} ] (ŵ ) = [Z'R^{-1}y]                         (86)

where M = (Z'R^{-1}Z + G^{-1})T. To obtain symmetry premultiply the second equation
by T' and subtract this product from the first equation. This gives

   [X'R^{-1}X - X'R^{-1}ZT - T'Z'R^{-1}X + T'M   X'R^{-1}Z - M'     ] (β°)   [X'R^{-1}y - T'Z'R^{-1}y]
   [Z'R^{-1}X - M                               Z'R^{-1}Z + G^{-1} ] (ŵ ) = [Z'R^{-1}y              ]   (87)

Let a g-inverse of the matrix of (87) be

   [C11  C12]
   [C12' C22] .

Then

   Var(K'β°) = K'C11 K,
   Var(ŵ - w) = C22.

Henderson's mixed model equations for a selection model, equation (31), in Biometrics
(1975a) can be derived from (86) by making the following substitutions: [X; B] for X,
(0 B) for T, and noting that B = ZBu + Be.
We illustrate (87) with the following example.

        [1 2]        [1 1 2]        [5 1 1 2]
        [2 1]        [2 3 2]        [  6 2 1]
   X =  [1 1] ,  Z = [1 2 1] ,  R = [    7 1] ,
        [3 4]        [4 1 3]        [      8]

        [3 1 1]        [3 1]
   G =  [  4 2] ,  T = [2 3] ,   y' = (5, 2, 3, 6).
        [    5]        [2 4]

The regular mixed model equations are

   [1.576535 1.651127 1.913753 1.188811 1.584305]            [2.651904]
   [         2.250194 2.088578  .860140 1.859363]            [3.871018]
   [                  2.763701 1.154009 1.822952] (β°, û)  = [3.184149]                      (88)
   [                           2.024882 1.142462]            [1.867133]
   [                                    2.077104]            [3.383061]

The solution is

   (-2.114786, 2.422179, .086576, .757782, .580739).

The equations for solution to β and to w = u + Tβ are

   [65.146040 69.396108 12.331273  8.607904 10.323684]            [17.400932]
   [          81.185360 11.428959 10.938364 11.699391]            [18.446775]
   [                     2.763701  1.154009  1.822952] (β°, ŵ)  = [ 3.184149]                (89)
   [                               2.024882  1.142462]            [ 1.867133]
   [                                         2.077104]            [ 3.383061]

The solution is

   (-2.115, 2.422, -3.836, 3.795, 6.040).

This is the same solution to β° as in (88), and û + Tβ° of the previous solution gives ŵ
of this solution. Further,

   [I 0]                    [I T']
   [T I]  [inverse of (88)] [0 I ]  =  [inverse of (89)] .
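In practice one rarely forms (87) directly; the same ŵ and its prediction error variance can be obtained from the ordinary mixed model equations and then transformed, as the identity above indicates. The following is a minimal numpy sketch of that route, assuming X has full column rank so that the coefficient matrix has a regular inverse.

    import numpy as np

    def predict_w(X, Z, G, R, y, T):
        """w = u + T b from the ordinary MME, plus Var/Cov for (b, w_hat - w)."""
        Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)
        C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
                      [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
        rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
        Cinv = np.linalg.inv(C)                 # assumes full-rank X
        sol = Cinv @ rhs
        p = X.shape[1]
        b, u = sol[:p], sol[p:]
        w = u + T @ b
        J = np.block([[np.eye(p), np.zeros((p, T.shape[0]))],
                      [T, np.eye(T.shape[0])]])
        return b, w, J @ Cinv @ J.T             # the transformed inverse, as in the text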

20 Derivation Of MME By Maximizing f(y, w)

This section describes first the method used by Henderson (1950) to derive his mixed
model equations. Then a more general result is described. For the regular mixed model
E

u
e

= 0, V ar

u
e
33

G
0

0
R

The density function is


f (y, u) = g(y | u) h(u),
and under normality the log of this is
k[(y X Zu)0 R1 (y X Zu) + u0 G1 u],
where k is a constant. Differentiating with respect to , u and equating to 0 we obtain
the regular mixed model equations.
Now consider a more general mixed linear model in which
y
w

X
T

with T estimable, and


!

y
w

V ar
with
V
C0

C
G

!1

V
C0

C
G

C11 C12
0
C12 C22

Log of f (y, w) is
k[(y X)0 C11 (y X) + (y X)0 C12 (w T)
0

+(w T)0 C12 (y X) + (w T)0 C22 (w T).


Differentiating with respect to and to w and equating to 0, we obtain
0

X0 C11 X + X0 C12 T + T0 C12 X + T0 C22 T (X0 C12 + T0 C22 )


(X0 C12 + T0 C22 )0
C22
o

X0 C11 y + T0 C12 y
0
C12 y

(90)

we obtain
Eliminating w
0

o
0
1
X0 (C11 C12 C1
22 C12 )X = X (C11 C12 C22 C12 )y.

But from partitioned matrix inverse results we know that


0

1
C11 C12 C1
22 C12 = V .

Therefore (91) are GLS equations and K0 o is BLUE of K0 if estimable.


from the second equation of (90).
Now solve for w
0

o
o
= C1
w
22 C12 (y X ) + T .
= C0 V1 (y X o ) + T o .
0
0 1
= BLUP of w because C1
22 C12 = C V .

34

(91)

0 1
To prove that C1
note that by the definition of an inverse C12 V+C22 C0 =
22 C12 = C V
1
0. Pre-multiply this by C1
to obtain
22 and post-multiply by V
0

0 1
0 1
C1
= 0 or C1
22 C12 + C V
22 C12 = C V .

We illustrate the method with the same example as that of section 18.

46

V = ZGZ0 + R =

Then from the inverse of

66 38 74

118 67 117

, C = ZG =

45 66
149

V
ZG
GZ0 G

6 9 13
11 18 18
6 11 10
16 14 21

, we obtain

.229215 .023310 .018648 .052059

.188811 .048951 .011655

.160839 .009324
.140637

C12 =
and

.044289
.258741
.006993
.477855

.069930 .236985
.433566 .247086

,
.146853
.002331
.034965 .285159

C22

C11

2.763701 1.154009 1.822952

2.024882 1.142462
=
.
2.077104

Then applying (90) to these results we obtain the same equations as in (89).
The method of this section could have been used to derive the equations of (82) for
Cov(u, e0 ) 6= 0.
f (y, u) = g(y | u) h(u).

E(y | u) = X + Tu, V ar(y | u) = B.


See section 17 for definition of T and B. Then
log g(y | u) h(u) = k(y X Tu)0 B1 (y X Tu) + u0 G1 u.
This is maximized by solving (82).

35

This method also could be used to derive the result of section 18. Again we make
use of f (y, w) = g(y | w) h(w).
E(y | w) = X + Z(w T).
V ar(y | w) = R.
Then
log g(y | w) h(w) = k[(y X + Zw ZT)0 R1 (y X + Zw ZT)]
+ (w T)0 G1 (w T).
This is maximized by solving equations (87).


Chapter 6
G and R Known to Proportionality
C. R. Henderson
1984 - Guelph

In the preceding chapters it has been assumed that Var(u) = G and Var(e) = R are
known. This is, of course, an unrealistic assumption, but was made in order to present
estimation, prediction, and hypothesis testing methods that are exact and which may
suggest approximations for the situation with unknown G and R. One case does exist,
however, in which BLUE and BLUP exist, and exact tests can be made, even when these
variances are unknown. This case is G and R known to proportionality.
Suppose that we know G and R to proportionality, that is,

   G = G* σe²,   R = R* σe².                                                                 (1)

G* and R* are known, but σe² is not. For example, suppose that we have a one way
mixed model

   yij = xij'β + ai + eij ,

   Var(a1 a2 ...)' = I σa²,   Var(e11 e12 ...)' = I σe².

Suppose we know that σa²/σe² = α. Then

   G = I σa² = I α σe²,   R = I σe².

Then by the notation of (1),  G* = I α,  R* = I.

BLUE and BLUP

Let us write the GLS equations with the notation of (1).

   V = ZGZ' + R = (ZG*Z' + R*) σe² = V* σe².

Then X'V^{-1}X β° = X'V^{-1}y can be written as

   σe^{-2} X'V*^{-1}X β° = X'V*^{-1}y σe^{-2}.                                               (2)

Multiplying both sides by σe² we obtain a set of equations that can be written as

   X'V*^{-1}X β° = X'V*^{-1}y.                                                               (3)

Then BLUE of K'β is K'β°, where β° is any solution to (3).
Similarly the mixed model equations with each side multiplied by σe² are

   [X'R*^{-1}X  X'R*^{-1}Z           ] (β°)   [X'R*^{-1}y]
   [Z'R*^{-1}X  Z'R*^{-1}Z + G*^{-1} ] (û ) = [Z'R*^{-1}y] .                                 (4)

û is BLUP of u when G* and R* are known.
To find the sampling variance of K'β° we need a g-inverse of the matrix of (2). This is
(X'V*^{-1}X)^- σe². Consequently,

   Var(K'β°) = K'(X'V*^{-1}X)^- K σe².                                                       (5)

Also Var(K'β°) = K'C11 K σe², where C11 is the upper p x p submatrix of a g-inverse of
the matrix of (4). Similarly all of the results of (34) to (41) in Chapter 5 are correct if we
multiply them by σe².
Of course σe² is unknown, so we can only estimate the variance by substituting some
estimate of σe², say σ̂e², in (5). There are several methods for estimating σe², but the most
frequently used one is the minimum variance, translation invariant, quadratic, unbiased
estimator computed by

   [y'V*^{-1}y - (β°)'X'V*^{-1}y]/[n - rank(X)]                                              (6)

or by

   [y'R*^{-1}y - (β°)'X'R*^{-1}y - û'Z'R*^{-1}y]/[n - rank(X)].                              (7)

A more detailed account of estimation of variances is presented in Chapters 10, 11, and
12.
Next looking at BLUP of u under model (1), it is readily seen that û of (4) is BLUP.
Similarly variances and covariances involving û and û - u are easily derived from the
results for known G and R. Let

   [C11  C12]
   [C12' C22]

be a g-inverse of the matrix of (4). Then

   Cov(K'β°, û' - u') = K'C12 σe²,                                                           (8)
   Var(û) = (G* - C22) σe²,                                                                  (9)
   Var(û - u) = C22 σe².                                                                    (10)
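Computationally the only change from the known-variance case is that the mixed model equations are built from G* and R*, and every (co)variance read off a g-inverse is rescaled by an estimate of σe² from (6) or (7). A minimal numpy sketch of this, written as a generic function under the assumption that a pseudo-inverse is an acceptable g-inverse, follows.

    import numpy as np

    def mme_proportional(X, Z, Gs, Rs, y):
        """MME built from G*, R*; returns b, u_hat, the estimate of sigma_e^2 from (7),
        and the rescaled g-inverse used for sampling and prediction error variances."""
        Ri, Gi = np.linalg.inv(Rs), np.linalg.inv(Gs)
        C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
                      [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
        rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
        Cg = np.linalg.pinv(C)                  # a g-inverse
        sol = Cg @ rhs
        p = X.shape[1]
        b, u = sol[:p], sol[p:]
        sse = y @ Ri @ y - b @ (X.T @ Ri @ y) - u @ (Z.T @ Ri @ y)   # numerator of (7)
        s2e = sse / (len(y) - np.linalg.matrix_rank(X))
        return b, u, s2e, Cg * s2e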

Tests of Hypotheses

In the same way in which G and R known to proportionality pose no problems in
BLUE and BLUP, exact tests of hypotheses regarding β can be performed, assuming as
before a multivariate normal distribution. Chapter 4 describes computation of a quadratic,
s, that is distributed as χ² with m - a degrees of freedom when the null hypothesis is
true, where m and a are the number of rows in H0' and Ha' respectively. Now we compute
these quadratics exactly as in those methods except that V*, G*, R* are substituted for
V, G, R. Then when the null hypothesis is true, s/[σ̂e²(m - a)] is distributed as F with
m - a and n - rank(X) degrees of freedom, where σ̂e² is computed by (6) or (7).

Power Of The Test Of Null Hypotheses

Two different types of errors can be made in tests of hypotheses. First, the null hypothesis
may be rejected when in fact it is true. This is commonly called a Type 1 error. Second,
the null hypothesis may be accepted when it is really not true. This is called a Type 2
error, and the power of the test is defined as 1 minus the probability of a Type 2 error.
The results that follow regarding power assume that G* and R* are known.
The power of the test can be computed only if

1. The true value of β for which the power is to be determined is specified. Different
   values of β give different powers. Let this value be βt. Of course we do not know
   the true value, but we may be interested in the power of the test, usually for some
   minimum differences among elements of β. Logically βt must be true if the null and
   the alternative hypotheses are true. Accordingly a βt must be chosen that violates
   neither H0'β = c0 nor Ha'β = ca.

2. The probability of the type 1 error must be specified. This is often called the chosen
   significance level of the test.

3. The value of σe² must be specified. Because the power should normally be computed
   prior to the experiment, this would come from prior research. Define this value as
   d.

4. X and Z must be specified.

Then let

   A = significance level,
   F1 = m - a = numerator d.f.,
   F2 = n - rank(X) = denominator d.f.

Compute Δ = the quadratic, s, but with Xβt substituted for y in the computations.
Compute

   P = [Δ/(m - a + 1)d]^{1/2}                                                               (11)

and enter Tiku's table (1967) with A, F1, F2, P to find the power of the test.
Let us illustrate computation of power by a simple one-way fixed model,

   yij = μ + ti + eij ,   i = 1, 2, 3,   Var(e) = I σe².

Suppose there are 3, 2, 4 observations respectively on the 3 treatments. We wish to test
H0'β = 0, where

         [0 1 0 -1]
   H0' = [0 0 1 -1] ,

against the unrestricted hypothesis.
Suppose we want the power of the test for βt' = [10, 2, 1, -3] and σe² = 12. That
is, d = 12. Then

   (Xβt)' = [12, 12, 12, 11, 11, 7, 7, 7, 7].

As we have shown, the reduction under the null hypothesis in this case can be found from
the reduced model E(yij) = μ. The OLS equations are

   [9 3 2 4] (μ°)   [86]
   [3 3 0 0] (t° ) = [36] .
   [2 0 2 0]        [22]
   [4 0 0 4]        [28]

A solution is (0, 12, 11, 7), and the reduction = 870. The restricted equations are
9μ° = 86, and the reduction is 821.78. Then s = 48.22 = Δ. Let us choose A = .05 as
the significance level,

   F1 = 2 - 0 = 2,
   F2 = 9 - 3 = 6,

   P = [48.22/(3(12))]^{1/2} = 1.157.

Entering Tiku's table we obtain the power of the test.

Chapter 7
Known Functions of Fixed Effects
C. R. Henderson
1984 - Guelph

In previous chapters we have dealt with linear relationships among elements of β of the
following types.

1. M'β° is a set of p - r non-estimable functions of β, and a solution to GLS or mixed
   model equations is obtained such that M'β° = c.

2. K'β is a set of r estimable functions. Then we write a set of equations, the solution
   to which yields directly BLUE of K'β.

3. H0'β is a set of estimable functions that are used in hypothesis testing.

In this chapter we shall be concerned with defined linear relationships of the form

   T'β = c.

All of these are linearly independent. The consequence of these relationships is that
functions of β may become estimable that are not estimable under a model with no
such definitions concerning β. In fact, if T'β represents p - r linearly independent
non-estimable functions, all linear functions of β become estimable.

Tests of Estimability

If T'β represents t < p - r non-estimable functions the following rule can be used to
determine what functions are estimable. Find C such that

   [X ]
   [T'] C = 0,                                                                               (1)

where C is a p x (p - r - t) matrix with rank p - r - t. C always exists. Then K'β is
estimable if and only if

   K'C = 0.                                                                                  (2)

To illustrate, suppose that

        [1 2 3 1]
        [2 1 3 5]
   X =  [1 3 4 0]
        [3 2 5 7]
        [1 1 2 2]
        [2 1 3 5]

with p = 4 and r = 2 because

      [ 1  3]
   X  [ 1 -1]  = 0.
      [-1  0]
      [ 0 -1]

Suppose we define T'β = (1 2 2 1)β. This is a non-estimable function because

              [ 1  3]
   (1 2 2 1)  [ 1 -1]  = (1 0) ≠ 0.
              [-1  0]
              [ 0 -1]

Now

   [X ]  [ 3]
   [T']  [-1]  = 0.
         [ 0]
         [-1]

Therefore K'β is estimable if and only if K'(3 -1 0 -1)' = 0. If we had defined

         [1 2 2 1]
   T' =  [2 1 1 3] ,

any function of β would be estimable because rank [X; T'] = 4. This is because
p - r = 4 - 2 = 2 non-estimable functions are defined.
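The rule in (1)-(2) is easy to automate: C can be taken as any basis for the null space of [X; T'], and estimability of K'β reduces to checking K'C = 0. The sketch below is a small numpy illustration using the X and single restriction of the example above; the null space is computed from the SVD.

    import numpy as np

    def null_space(A, tol=1e-10):
        """Columns spanning the null space of A, from the SVD."""
        _, s, vt = np.linalg.svd(A)
        rank = int(np.sum(s > tol))
        return vt[rank:].T

    def estimable_given_T(X, Tp, Kp, tol=1e-8):
        """Rule (1)-(2): K'b is estimable, given T'b = c, iff K'C = 0."""
        C = null_space(np.vstack([X, Tp]))
        return np.all(np.abs(Kp @ C) < tol)

    X = np.array([[1,2,3,1],[2,1,3,5],[1,3,4,0],
                  [3,2,5,7],[1,1,2,2],[2,1,3,5]], dtype=float)
    Tp = np.array([[1,2,2,1]], dtype=float)       # the single restriction above
    print(estimable_given_T(X, Tp, X[:1]))        # rows of X are estimable: True
    print(estimable_given_T(X, Tp, np.array([[3.,-1.,0.,-1.]])))  # not estimable: False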

BLUE when Subject to T0

One method for computing BLUE of K0 , estimable given T0 = c, is K0 o , where o is


a solution to either of the following.
X0 V1 X T
T0
0

X0 V1 y
c

(3)

X0 R1 X
X0 R1 Z
T
o
X0 R1 y
0 1

0 1

0 1
1
= Z R y .
0 u
(4)
ZR X ZR Z+G
0
T
0
0

c
0
If T represents p r linearly independent non-estimable functions, o has a unique
solution. A second method where c = 0 is the following. Partition T0 , with re-ordering
of columns if necessary, as
0
0
T0 = [T1 T2 ],

the re-ordering done, if necessary, so that T2 is non-singular. This of course implies that
0
T2 is square. Partition X = [X1 X2 ], where X2 has the same number of columns as T2
and with the same re-ordering of columns as in T0 . Let
0

W = X1 X2 (T02 )1 T1 .
Then solve for o , in either of the following two forms.
W0 V1 W o1 = W0 V1 y.

(5)

W0 R1 W W0 R1 Z
o1
W0 R1 y
=

Z0 R1 W Z0 R1 Z + G1
u
Z0 R1 y
In terms of the model with no definitions on the parameters,
!

E( o1 ) = (W0 V1 W) W0 V1 X.
0
0
o2 = (T2 )1 T1 o1 .
0
E( o2 ) = (T2 )1 T01 E( o1 ).

(6)

(7)
(8)
(9)

Let us illustrate with the same X used for illustrating estimability when T0 is
defined. Suppose we define
0

T =

1 2 2 1
2 1 1 3

, c = 0.

then T0 are non-estimable functions. Consequently the following GLS equations have a
unique solution. It is assumed that V ar(y) = Ie2 . The equations are

2
e

20
16
36
44
1
2

16
20
36
28
2
1

36 44 1 2
36 28 2 1
72 72 2 1
72 104 1 3
2
1 0 0
1
3 0 0
3

46
52
98
86
0
0

2
.
e

(10)

and y0 = [5, 3, 7, 2, 6, 8].


The solution is [380, 424, 348, 228, 0, 0]/72. If T0 = 0 is really true, any linear
function of is estimable.
By the method of (5)
0

X1 =
0

X2 =
0

T1 =

1 2 1 3 1 2
2 1 3 2 1 1

3 3 4 5 2 3
1 5 0 7 2 5

1 2
2 1

2 1
1 3

, T2 =

then
0

W =

.2 1.6
.2 2.2 .6 1.6
1.0 2.0 1.0 3.0 1.0 2.0

Equations like (5) are


e2

10.4 13.6
13.6 20.0

o1

25.2
46.0

e2 .

The solution is o1 = (380, -424)/72. By (8)


o2

.2 1.0
.6
0

380
424

348
228

/72 =

/72.

These are identical to the result by method (3). E( o1 ) by (7) is


0
2.5
2.5 2.5
1.0 2.5 3.5 .5

It is easy to verify that these are estimable under the restricted model.
At this point it should be noted that the computations under T0 = c, where these
represent pr non-estimable functions are identical with those previously described where
the GLS or mixed model equation solution is restricted to M0 o = c. However, all linear
functions of are estimable under the restriction regarding parameters whereas they are
not when these restrictions are on the solution, o . Restrictions M0 o = c are used only
for convenience whereas T0 = c are used because that is part of the model.
Now let us illustrate with our same example, but with only one restriction, that being
(2 1 1 3) = 0.
4

Then equations like (3) are

2
e

20
16
36
44
2

16
20
36
28
1

36 44 2
36 28 1
72 72 1
72 104 3
1
3 0

46
52
98
86
0

e2 .

These do not have a unique solution, but one solution is (-88, 0, 272, -32, 0)/144. By the
method of (5)
0

T1 = (2 1 1),
0
T2 = 3.
This leads to

102
68 52 32

o
1 2
9 e 52 116 128 1 = 210 91 e2 .
624
32 128 320
These do not have a unique solution but one solution is (-88 0 272)/144 as in the other
method for o1 .

Sampling Variances

If the method of (3) is used,


V ar(K0 o ) = K0 C11 K,

(11)

where C11 is the upper p2 submatrix of a g-inverse of the coefficient matrix. The same is
true for (4).
If the method of (5) is used
0

V ar(K1 o1 ) = K1 (W0 V1 W) K1 .
0
0
0
Cov(K1 o1 , o2 K2 ) = K1 (W0 V1 W) T1 T1
2 K2 .
0
0
0 1
0
o
0 1
V ar(K2 2 ) = K2 (T2 ) T1 (W V W) T1 T1
2 K2 .

(12)
(13)
(14)

If the method of (6) is used, the upper part of a g-inverse of the coefficient matrix is used
in place of (11).
Let us illustrate with the same example and with one restriction. A g-inverse of the
coefficient matrix is

0
0
0
0
0

0
80 32 16
576
2

e
29
1 432
0 32
.
576
1
5
144
0 16

0 576 432 144


0
5

Then

0
80 32 16
0 o
0
29
1
V ar(K ) =
K 0 32
K.
576
0 16
1
5
e2

Using the method of (5) a g-inverse of W0 V1 W is

0
0
0
e2

80 32
,
0

576
0 32
29
which is the same as the upper 3 3 of the matrix above. From (13)

(W0 V1 W) T1 T1
2
0

0
0
0
2
0
1

1
80 32
=
0
1 = 16
576
3
576
0 32
29
1
1

and for (14) (T2 )1 T1 (W0 V1 W) T1 T1


= 5/576, thus verifying that the sampling
2
variances are the same by the two methods.

Hypothesis Testing
0

As before let H0 = c0 be the null hypothesis and Ha = ca be the alternative hypothesis,


0
0
but now we have defined T0 = c. Consequently H0 and Ha need be estimable only
when T0 = c is assumed.
Then the tests proceed as in the unrestricted model except that for the null hypothesis
computations we substitute
0

H0
T0

c0
c

for H0 c0 .

(15)

and for the alternative hypothesis we substitute


0

Ha
T0

ca
c

for Ha ca .

To illustrate suppose the unrestrained GLS equations are

6
3
2
1

3
7
1
2

2
1
8
1

1
2
1
9

o =

9
12
15
16

(16)

Suppose that we define T0 = 0 where T0 = (3 1 2 3).


0

We wish to test H0 = 0, where


H0 =
0

1 2 1 0
2 1 0 1

against Ha = 0, where Ha = [1 -1 -1 1]. Note that (-1 1) H0 = Ha and both are


estimable. Therefore these are valid hypotheses. Using the reduction method we solve

6
3
2 1
1 3
3
7
1 2 1 1

2
1
8 1 1 2

1
2
1 9
1 3

1 1 1 1
0 0
3
1
2 3
0 0

a
a

9
12
15
16
0
0

The solution is [-1876, 795, -636, 2035, -20035, 20310]/3643, and the reduction under Ha
is 15676/3643 = 4.3030. Then solve

6
3
2
1
1
2
3

3
7
1
2
2
1
1

2
1
8
1
1
0
2

1
2
1
9
0
1
3

1
2
1
0
0
0
0

2
1
0
1
0
0
0

3
1
2
3
0
0
0

0
0

9
12
15
16
0
0
0

The solution is [-348, 290, -232, 406, 4380, -5302, 5088]/836, and the reduction is 3364/836
= 4.0239. Then we test 4.3030 - 4.0239 = .2791 entering 2 with 1 degree of freedom
0
0
coming from the differences between the number of rows in H0 and Ha .
0

By the method involving V ar(Ho ) and V ar(Ha ) we solve the following equations
and find a g-inverse of the coefficient matrix.

6
3
2
1
3

3
7
1
2
1

2
1
8
1
2

1
2
1
9
3

3
1
2
3
0

o
0

9
12
15
16
0

The solution is [-7664, 8075, 5561, 1265, 18040]/4972. The inverse is

624 276 336 308

887
53 55

659 121

407

1012
352
220
616
2024

/4972.

Now

H0 o = [2.82522 1.20434]0 ,

H0 C11 H0 =

.65708 .09735
.27031

and

(H0 C11 H0 )

1.60766 .57895
3.90789

= B,

where C11 is the upper 4 4 submatrix of the inverse of the coefficient matrix. Then
[2.82522 1.20434] B [2.8522 1.20434]0 = 22.44007.
0

Similarly computations with Ha = (1 -1 -1 1), give Ha a = -4.02957, B = 1.36481, and


(4.02957)B(4.02957) = 22.16095. Then 22.44007 - 22.16095 = .2791 as before.

Chapter 8
Unbiased Methods for G and R Unknown
C. R. Henderson
1984 - Guelph

Previous chapters have dealt with known G and R or known proportionality of these
matrices. In these cases BLUE, BLUP, exact sampling variances, and exact tests of
hypotheses exist. In this chapter we shall be concerned with the unsolved problem of
what are best estimators and predictors when G and R are unknown even to proportionality.
We shall construct many unbiased estimators and predictors and under certain
circumstances compute their variances. Tests of hypotheses pose more serious problems,
for only approximate tests can be made. We shall be concerned with three different
situations regarding estimation and prediction. These are described in Henderson and
Henderson (1979) and in Henderson, Jr. (1982).

1. Methods of estimation and prediction not involving G and R.

2. Methods involving G and R in which assumed values, say G̃ and R̃, are used in the
   computations and these are regarded as constants.

3. The same situation as 2, but G̃ and R̃ are regarded more realistically as estimators
   from data and consequently are random variables.

Unbiased Estimators

Many unbiased estimators of K'β can be computed. Some of these are much easier
than GLS or mixed model methods with G̃ and R̃ used. Also some of them are invariant
to G̃ and R̃. The first, and one of the easiest, is ordinary least squares (OLS) ignoring u.
Solve for β° in

   X'X β° = X'y.                                                                             (1)

Then E(K'β°) = E[K'(X'X)^- X'y] = K'(X'X)^- X'Xβ = K'β if K'β is estimable. The
variance of K'β° is

   K'(X'X)^- X'(ZGZ' + R)X(X'X)^- K,                                                         (2)

and this can be evaluated easily for chosen G̃, R̃, but it is valid only if G̃ and R̃ are
regarded as fixed.

A second estimator is analogous to weighted least squares. Let D be a diagonal


Then solve
0 + R).
matrix formed from the diagonals of (ZGZ
X0 D1 X o = X0 D1 y.

(3)

K0 o is an unbiased estimator of K0 if estimable.


V ar(K0 o ) = K0 (X0 D1 X) X0 D1 (ZGZ0 + R)D1 X(X0 D1 X) K.

(4)

1 is easy to compute, but V


1 is not easy, is to solve
A third possibility if R
1 X o = XR
1 y.
X0 R

(5)

1 X) K.
1 (ZGZ0 + R)R
1 X(X0 R
1 X) X0 R
V ar(K0 o ) = K0 (X0 R

(6)

These methods all would seem to imply that the diagonals of G1 are large relative
to diagonals of R1 .
Other methods would seem to imply just the opposite, that is, the diagonals of
G are small relative to R1 . One of these is OLS regarding u as fixed for purposes of
computation. That is solve
1

X0 X X 0 Z
Z0 X Z0 Z

o
uo

X0 y
Z0 y

(7)

Then if K0 is estimable under a fixed u model, K0 o is an unbiased estimator of K0 .


However, if K0 is estimable under a random u model, but is not estimable under a
fixed u model, K0 o may be biased. To forestall this, find a function K0 + M0 u that is
estimable under a fixed u model. Then K0 o + M0 uo is an unbiased estimator of K0 .
0

K
M

V ar(K + M u ) = [K M ]CW (ZGZ + R)WC


0

= (K M0 )CW0 RWC

K
M

(8)

+ M0 GM,

(9)

where C is a g-inverse of the matrix of (7) and W = (X Z).


The method of (9) is simpler than (8) if R has a simple form compared to ZGZ0 . In
fact, if R = Ie2 , the first term of (9) becomes
0

K
M

(K M )C

e2 .

(10)

Analogous estimators would come from solving


1 X X0 R
1 Z
X0 R
1 X Z0 R
1 Z
Z0 R

o
uo
2

1 y
X0 R
1 y
Z0 R

(11)

Another one would use D1 in place of R1 where D is a diagonal matrix formed


In both of these last two methods K0 o + M0 uo would be the
from the diagonals of R.
estimator of K0 , and we require that K0 + M0 u be estimable under a fixed u model.
From (11)
1 (ZGZ0 + R)R
1 WC
V ar(K0 o + M0 uo ) = (K0 M0 )CW0 R
0

K
M

= (K M )CW R RR WC

K
M

+M0 GM.

(12)

1 the expression in (12) is altered by making this same


When D1 is substituted for R
substitution.
Another method which is a compromise between (1) and (11) is to ignore a subvector
of u, say u2 , then compute by OLS regarding the remaining subvector of u, say u1 , as
fixed. The resulting equations are
X0 X X0 Z1
0
0
Z1 X Z1 Z1

o
uo1

X0 y
0
Z1 y
0

(13)

(Z1 Z2 ) is a partitioning of Z corresponding to u0 = (u1 u2 ). Now to insure unbiasedness


of the estimator of K0 we need to find a function,
K0 + M0 u1 ,
that is estimable under a fixed u1 model. Then the unbiased estimator of K0 is
K0 o + M0 uo1 ,
The variance of this estimator is
0

(K M )CW (ZGZ + R)WC

K
M

(14)

W = (X Z1 ), and ZGZ0 refers to the entire Zu vector, and C is some g-inverse of the
matrix of (13).
Let us illustrate some of these methods with a simple example.
X0 = [1 1 1 1 1],

1 1 0 0 0

Z0 = 0 0 1 1 0 , R = 15I, G = 2I,
0 0 0 0 1
3

y0 = [6 8 7 5 7].

17

2
17

V ar(y) = ZGZ0 + R =

0
0
17

0
0
2
17

0
0
0
0
17

is estimable. By the method of (1) we solve


5 o = 33.
o = 6.6.

V ar( ) = .2 (1 1 1 1 1) Var

(y)

1
1
1
1
1

.2 = 3.72.

By the method of (7) the equations to be solved are


5 2 2 1

2 0 0

2 0
1

o
uo

33
14
12
7

A solution is (0, 7, 6, 7). Because is not estimable!when u is fixed, we need some

function with k0 = 1 and m0 such that (k0 m0 )


is estimable. A possibility is
u
(3 1 1 1)/3. The resulting estimate is 20/3 6= 6.6, our previous estimate. To find the
variance of the estimator by method (8) we can use a g-inverse.
0

0
.5

0 0
0 0

.
.5 0
1

(k0 m0 )CW0 =

1
1
0
0

1
1
0
0

1
0
1
0

1
0
1
0

(3 1 1 1)

1
0
0
1

0
.5

0 0
0 0

.5 0
1

1
(1 1 1 1 2).
6

Then V ar( o ) = 4 6=
3.333 + .667 = 4 also.

3.72 of previous result. By the method of (9) we obtain

BLUE would be obtained by using the mixed model equations with R = 15I,
G = 2I if these are the true values of R and G. The resulting equations are

1
15

5
2
2
1

2
9.5
0
0

2
0
9.5
0

1
0
0
8.5

o
u

33
14
12
7

/15.

o = 6.609.
The upper 1 1 of a g-inverse is 3.713, which is less than for any other methods, but
of course depends upon true values of G and R.

Unbiased Predictors

The method for prediction of u used by most animal breeders prior to the recent
general acceptance of the mixed model equations was selection index (BLP) with some
estimate of Xβ regarded as a parameter value. Denote the estimate of Xβ by Xβ̃. Then
the predictor of u is

   û = G̃Z'Ṽ^{-1}(y - Xβ̃).                                                                  (15)

G̃ and Ṽ are estimated G and V.
This method utilizes the entire data vector and the entire variance-covariance
structure to predict. More commonly a subset of y was chosen for each individual element
of u to be predicted, and (15) involved this reduced set of matrices and vectors.
Now if Xβ̃ is an unbiased estimator of Xβ, E(û) = 0 = E(u) and û is unbiased. Even
if G and R were known, (15) would not represent a predictor with minimum sampling
variance. We have already found that for this β̃ should be a GLS solution. Further,
in selection models (discussed in Chapter 13), usual estimators of Xβ such as OLS or
estimators ignoring u are biased, so û is no longer an unbiased predictor.
Another unbiased predictor, if computed correctly, is regressed least squares, first
reported by Henderson (1948). Solve for u° in equations (16).

   [X'X  X'Z] (β°)   [X'y]
   [Z'X  Z'Z] (u°) = [Z'y]                                                                  (16)

Take a solution for which E(u°) = 0 in a fixed as well as a random u model. This can be
done by absorbing β° to obtain a set of equations

   Z'PZ u° = Z'Py,                                                                           (17)

where

   P = [I - X(X'X)^- X'].

Then any solution to u°, usually not a unique solution, has expectation 0, because
E[I - X(X'X)^- X']y = Xβ - X(X'X)^- X'Xβ = Xβ - Xβ = 0. Thus u° is an unbiased
predictor, but not a good one for selection, particularly if the amount of information
differs greatly among individuals.
Let some g-inverse of Z'PZ be defined as C. Then

   Var(u°) = CZ'P(ZGZ' + R)PZC,                                                              (18)
   Cov(u, u°') = GZ'PZC.                                                                     (19)

Let the ith diagonal of (18) be vi, and the ith diagonal of (19) be ci, both evaluated by
some estimate of G and R. Then the regressed least squares prediction of ui is

   ci u°i / vi.                                                                              (20)

This is BLP of ui when the only observation available for prediction is u°i. Of course
other data are available, and we could use the entire u° vector for prediction of each ui.
That would give a better predictor because (18) and (19) are not diagonal matrices.
In fact, BLUP of u can be derived from u°. Denote (18) by S and (19) by T. Then
BLUP of u is

   TS^- u°,                                                                                  (21)

provided G and R are known. Otherwise it would be approximate BLUP.
This is a cumbersome method as compared to using the mixed model equations,
but it illustrates the reason why regressed least squares is not optimum. See Henderson
(1978b) for further discussion of this method.
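A minimal numpy sketch of the regressed least squares computation (16)-(21) follows; it is written as a generic function, with the g-inverses taken as pseudo-inverses, which is one acceptable choice but not the only one.

    import numpy as np

    def regressed_ls(X, Z, G, R, y):
        """Equations (16)-(21): absorb beta, solve Z'PZ u0 = Z'Py, then regress."""
        P = np.eye(len(y)) - X @ np.linalg.pinv(X.T @ X) @ X.T
        Cg = np.linalg.pinv(Z.T @ P @ Z)           # a g-inverse of Z'PZ
        u0 = Cg @ Z.T @ P @ y                      # unbiased but unregressed predictor
        S = Cg @ Z.T @ P @ (Z @ G @ Z.T + R) @ P @ Z @ Cg   # (18) Var(u0)
        T = G @ Z.T @ P @ Z @ Cg                            # (19) Cov(u, u0')
        regressed = np.diag(T) / np.diag(S) * u0            # (20) one record at a time
        blup_from_u0 = T @ np.linalg.pinv(S) @ u0           # (21) using all of u0
        return u0, regressed, blup_from_u0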

Substitution Of Fixed Values For G And R

In the methods presented above it appears that some assumption is made concerning
the relative values of G and R. Consequently it seems logical to use a method that
approaches optimality as G̃ and R̃ approach G and R. This would be to substitute
G̃ and R̃ for the corresponding parameters in the mixed model equations. This is a
procedure which requires no choice among a variety of unbiased methods. Further, it has
the desirable property that if G̃ and R̃ are fixed, the estimated sampling variances and
prediction error variances are simple to express. Specifically, the variances and covariances
estimated for G = G̃ and R = R̃ are precisely the results in (34) to (41) in Chapter 5.
It also is true that the estimators and predictors are unbiased. This is easy to prove
for fixed G̃ and R̃, but for estimated (random) G̃ and R̃ we need to invoke a result by
Kackar and Harville (1981) presented in Section 4. For fixed G̃ and R̃ note that after
absorbing û from the mixed model equations we have

   X'Ṽ^{-1}X β° = X'Ṽ^{-1}y.

Then

   E(K'β°) = E(K'(X'Ṽ^{-1}X)^- X'Ṽ^{-1}y)
           = K'(X'Ṽ^{-1}X)^- X'Ṽ^{-1}Xβ
           = K'β.

Also

   û = (Z'R̃^{-1}Z + G̃^{-1})^{-1} Z'R̃^{-1}(y - Xβ°).

But Xβ° is an unbiased estimator of Xβ, so y - Xβ° has expectation 0 and consequently
E(û) = 0 and û is unbiased.

Mixed Model Equations With Estimated G and R

It is not a trivial problem to find the expectations of K'β° and û from mixed model
equations with estimated G and R. Kackar and Harville (1981) derived a very important
result for this case. They prove that if G and R are estimated by a method having the
following properties and substituted in mixed model equations, the resulting estimators
and predictors are unbiased. This result requires that

1. y is symmetrically distributed, that is, f(y) = f(-y).

2. The estimators of G and R are translation invariant.

3. The estimators of G and R are even functions of y.

These are not very restrictive requirements because they include a variety of distributions
of y and most of the presently used methods for estimation of variances and covariances.
An interesting consequence of substituting ML estimates of G and R for the corresponding
parameters of mixed model equations is that the resulting K'β° are ML and the û are
ML of E(u | y).

Tests Of Hypotheses Concerning β

We have seen that unbiased estimators and predictors can be obtained even though
G and R are unknown. When it comes to testing hypotheses regarding β little is known
except that exact tests do not exist apart from a special case that is described below.
The problem is that quadratics in H0'β° - c appropriate for exact tests when G and
R are known do not have a χ² or any other tractable distribution when G̃, R̃ replace
G, R in the computation. What should be done? One possibility is to estimate, if
possible, G, R by ML and then invoke a likelihood ratio test, in which under normality
assumptions and large samples, -2 log likelihood ratio is approximated by χ². This raises
the question of what is a large sample of unbalanced data. Certainly large n is not
a sufficient condition. Consideration needs to be given to the number of levels of each
subvector of u and to the proportion of missing subclasses. Consequently the value of a
χ² approximation to the likelihood ratio test is uncertain.
A second and easier approximation is to pretend that G̃ = G and R̃ = R and
proceed to an approximate χ² test as described in Chapter 4 for hypothesis testing
with known G, R and normality assumptions. The validity of this test must surely
depend, as it does in the likelihood ratio approximation, upon the number of levels of u
and the balance and lack of missing subclasses.
One interesting case exists in which exact tests of β can be made even when we do
not know G and R to proportionality. The requirements are as follows:

1. Var(e) = I σe², and

2. H0'β is estimable under a fixed u model.

Solve for β° in equations (7). Then

   Var(H0'β°) = H0'C11 H0 σe²                                                               (22)

where C11 is the upper p x p submatrix of a g-inverse of the coefficient matrix. Then
under the null hypothesis versus the unrestricted hypothesis

   (H0'β°)'[H0'C11 H0]^{-1} H0'β° / (s σ̂e²)                                                 (23)

is distributed as F with degrees of freedom s, n - rank(X Z). σ̂e² is an estimate of σe²
computed by

   (y'y - (β°)'X'y - (u°)'Z'y)/[n - rank(X Z)],                                             (24)

and s is the number of rows, linearly independent, in H0'.

Chapter 9
Biased Estimation and Prediction
C. R. Henderson
1984 - Guelph

All methods for estimation and prediction in previous chapters have been unbiased.
In this chapter we relax the requirement of unbiasedness and attempt to minimize
the mean squared error of estimation and prediction. Mean squared error refers to the sum
of prediction error variance plus squared bias. In general, biased predictors and estimators
exist that have smaller mean squared errors than BLUE and BLUP. Unfortunately,
we never know what are truly minimum mean squared error estimators and predictors
because we do not know some of the parameters required for deriving them. But even
for BLUE and BLUP we must know G and R at least to proportionality. Additionally,
for minimum mean squared error we need to know squares and products of β at least
proportionally to G and R.

Derivation Of BLBE And BLBP


0

Suppose we want to predict k1 1 + k2 2 + m0 u by a linear function of y, say a0 y,


0
such that the predictor has expectation k1 1 plus some function of 2 , and in the class
of such predictors, has minimum mean squared error of prediction, which we shall call
BLBP (best linear biased predictor).
The mean squared error (MSE) is
0

a0 Ra + (a0 X2 k2 ) 2 2 (X2 a k2 ) + (a0 Z m0 )G(Z0 a m).


0

(1)

In order that E(a0 y) contains k1 1 it is necessary that a0 X1 1 = k1 1 , and this will be


0
true for any 1 if a0 X1 = k1 . Consequently we minimize (1) subject to this condition.
Differentiating (1) with respect to a and to an appropriate Lagrange Multiplier, we have
equations (2) to solve.
0

V + X 2 2 2 X 2 X1
0
X1
0

ZGm + X2 2 2 k2
k1

(2)

a has a unique solution if and only if k1 1 is estimable under a model in which E(y)
contains X1 1 . The analogy to GLS of 1 is a solution to (3).
0

X1 (V + X2 2 2 X2 )1 X1 1 = X1 (V + X2 2 2 X2 )1 y.
1

(3)

Then if K1 1 is estimable under a model, E(y) containing X1 1 , K1 1 is unique and is


0
the minimum MSE estimator of K1 1 . The BLBE of 2 is
0

2 2 X2 (V + X2 2 2 X2 )1 (y X1 1 )

(4)

2 , and this is unique provided K1 1 is estimable when E(y) contains X1 1 . The


BLBP of u is
0
0
(5)
u = GZ0 (V + X2 2 2 X2 )1 (y X1 1 ),
and this is unique. Furthermore BLBP of
0

K1 1 + K2 2 + M0 u is K1 1 + K2 2 + M0 u .

(6)

We know that BLUE and BLUP can be computed from mixed model equations.
Similarly 1 , 2 , and u can be obtained from modified mixed model equations (7), (8),
0
or (9). Let 2 2 = P. Then with P singular we can solve (7).
0

X1 R1 X1
X1 R1 X2
X1 R1 Z
0
0
0

PX2 R1 X1 PX2 R1 X2 + I PX2 R1 Z


Z0 R1 X1
Z0 R1 X2
Z0 R1 Z + G1

X1 R1 y
1
0


2 = PX2 R1 y
u
Z0 R1 y

(7)

The rank of this coefficient matrix is rank (X1 )+p2 +q, where p2 = the number of elements
in 2 . The solution to 2 and u is unique but 1 is not unless X1 has full column rank.
Note that the coefficient matrix is non-symmetric. If we prefer a symmetric matrix, we
can use equations (8).
0

X1 R1 X1
X1 R1 X2 P
X1 R1 Z
0
0
0

PX2 R1 X1 PX2 R1 X2 P + P PX2 R1 Z


Z0 R1 X1
Z0 R1 X2 P
Z0 R1 Z + G1

X1 R1 y
1
0

2 = PX2 R1 y
u
Z0 R1 y

(8)
0

Then 2 = P2 . The rank of this coefficient matrix is rank (X1 ) + rank (P) + q. K1 1 ,
2 , and u are identical to the solution from (7). If P were non-singular we could use
equations (9).
0

X1 R1 X1 X1 R1 X2
X1 R1 Z
0
0
0

X2 R1 X1 X2 R1 X2 + P1 X2 R1 Z
0 1
0 1
0 1
1
Z R X 1 Z R X2
ZR Z+G

1
X1 R1 y
0

2 = X2 R1 y
u
Z0 R1 y

(9)

The rank of this coefficient matrix is rank (X1 ) + p2 + q.


Usually R, G, and P are unknown, so we need to use guesses or estimates of them,

and P.
These would be used in place of the parameter values in (2) through
say R, G,
(9).
In all of these except (9) the solution to 2 has a peculiar and seemingly undesirable
, where k is some constant. That is, the elements of are
property, namely 2 = k
2
2
2 . Also it should be noted that if, as should always be
proportional to the elements of
the case, P is positive definite or positive semi-definite, the elements of 2 are shrunken
(are nearer to 0) compared to the elements of the GLS solution to 2 when X2 is full
column rank. This is comparable to the fact that BLUP of elements of u are smaller in
absolute value than are the corresponding GLS computed as though u were fixed. This
last property of course creates bias due to 2 but may reduce mean squared errors.
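When the prior matrix P is non-singular, the system corresponding to equations (9) is just the ordinary mixed model equations with P^{-1} added to the diagonal block for β2, which is what produces the shrinkage toward zero described above. The following is a minimal numpy sketch of that case; P, G and R here stand for the guessed or estimated values, not known parameters.

    import numpy as np

    def blbp_nonsingular_P(X1, X2, Z, G, R, P, y):
        """Equations (9): prior P for beta2 squares and products, P nonsingular.
        Returns beta1 (a solution), shrunken beta2, and u.  A sketch only; the
        quality of the result depends entirely on the guessed P, G and R."""
        Ri, Gi, Pi = np.linalg.inv(R), np.linalg.inv(G), np.linalg.inv(P)
        W = [X1, X2, Z]
        C = np.block([[a.T @ Ri @ b for b in W] for a in W])
        p1, p2 = X1.shape[1], X2.shape[1]
        C[p1:p1 + p2, p1:p1 + p2] += Pi      # prior added to the beta2 block
        C[p1 + p2:, p1 + p2:] += Gi          # usual G^{-1} added to the u block
        rhs = np.concatenate([a.T @ Ri @ y for a in W])
        sol = np.linalg.lstsq(C, rhs, rcond=None)[0]
        return sol[:p1], sol[p1:p1 + p2], sol[p1 + p2:]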

Use Of An External Estimate Of

We next consider methods for utilizing an external estimate of in order to obtain


a better unbiased estimator from a new data set. For this purpose it will be simplest to
assume that in both the previous experiments and the present one the rank of X is r p
and that the same linear dependencies among columns of X existed in both cases. With
possible re-ordering the full rank subset is denoted by X1 and the corresponding by 1 .
Suppose we have a previous solution to 1 denoted by 1 and E( 1 ) = 1 + L 2 where
X = (X1 X2 ) and X2 = (X1 L). Further V ar( 1 ) = V1 . Assuming logically that the
prior estimator is uncorrelated with the present data vector, y, the GLS equations are
0
= X0 V1 y + V1 .
(X1 V1 X1 + V11 )
1
1
1
1
0

(10)

, and its variance is


Then BLUE of K0 , where K0 has the form (K1 K1 L) is K1
1
0

K1 (X1 V1 X1 + V11 )1 K1 .

(11)

The mixed model equations corresponding to (10) are


0

X1 R1 X1 + V11 X1 R1 Z
Z0 R1 X1
Z0 R1 Z + G1

X1 R1 y + V11 1
Z0 R1 y

(12)

Assumed Pattern Of Values Of

The previous methods of this chapter requiring prior values of every element of
and resulting estimates with the same proportionality as the prior is rather distasteful.
A possible alternative solution is to assume a pattern of values of with less than p
3

parameters. For example, with two way, fixed, cross-classified factors with interaction we
might assume in some situations that there is no logical pattern of values for interactions.
Defining for convenience that the interactions sum to 0 across each row and each column,
and then considering all possible permutations of the labelling of rows and columns, the
following is true for the average squares and products of these interactions. Define the
interaction for the ij th cell as ij and define the number of rows as r and the number of
columns as c. The average values are as follows.
2
ij
ij ij 0
ij i0 j
ij i0 j 0

=
=
=
=

,
/(c 1),
/(r 1),
/(c 1)(r 1).

(13)
(14)
(15)
(16)

Then if we have some prior value of we can proceed to obtain locally minimum
mean squared error estimators and predictors as follows. Let P = estimated average
0
value of 2 2 . Then solve equations (7), (8) or (9).

Evaluation Of Bias

If we are to consider biased estimation and prediction, we should know how to evaluate
the bias. We do this by looking at expectations. A method applied to (7) is as follows.
Remember that K₁'β̂₁ is required to have expectation K₁'β₁ + some linear function of β₂.
For this to be true K₁'β₁ must be estimable under a model with X₂β₂ not existing. β̂₂
and û are required to have expectations that are some linear function of β₂.
Let some g-inverse of the matrix of (7) be

$$\begin{pmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{pmatrix}
= \begin{pmatrix} C_1 \\ C_2 \\ C_3 \end{pmatrix}. \qquad (17)$$

Then

$$E(K_1'\hat\beta_1) = K_1'\beta_1 + K_1'C_1T\beta_2, \qquad (18)$$

where

$$T = \begin{pmatrix} X_1'R^{-1}X_2 \\ PX_2'R^{-1}X_2 \\ Z'R^{-1}X_2 \end{pmatrix}.$$

$$E(\hat\beta_2) = C_2T\beta_2. \qquad (19)$$
$$E(\hat u) = C_3T\beta_2. \qquad (20)$$

Then the biases are as follows.

    For K₁'β̂₁, bias = K₁'C₁Tβ₂.        (21)
    For β̂₂,    bias = (C₂T - I)β₂.     (22)
    For û,     bias = C₃Tβ₂.           (23)

If the equations (8) are used, the biases are the same as in (21), (22), and (23) except
that (22) is premultiplied by P, and C refers to a g-inverse of the matrix of (8). If the
equations of (9) are used, the second term of T is X₂'R⁻¹X₂, and C refers to the inverse
of the matrix of (9).
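The bias expressions (21)-(23) are direct matrix products once C, T and an assumed β₂ are available. A minimal sketch, assuming those arrays have already been built from the data (the function and variable names are mine, not the text's):

    import numpy as np

    def biases(C1, C2, C3, T, beta2, K1=None):
        """Biases (21)-(23) evaluated at an assumed value of beta2."""
        b_beta1 = C1 @ T @ beta2                   # (21); premultiply by K1' if supplied
        if K1 is not None:
            b_beta1 = K1.T @ b_beta1
        b_beta2 = (C2 @ T - np.eye(len(beta2))) @ beta2   # (22)
        b_u = C3 @ T @ beta2                               # (23)
        return b_beta1, b_beta2, b_u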

Evaluation Of Mean Squared Errors

If we are to use biased estimation and prediction, we should know how to estimate
mean squared errors of estimation and prediction. For the method of (7) proceed as
follows. Let

$$T = \begin{pmatrix} X_1'R^{-1}X_2 \\ PX_2'R^{-1}X_2 \\ Z'R^{-1}X_2 \end{pmatrix}. \qquad (24)$$

Note the similarity to the second column of the matrix of (7). Let

$$S = \begin{pmatrix} X_1'R^{-1}Z \\ PX_2'R^{-1}Z \\ Z'R^{-1}Z \end{pmatrix}. \qquad (25)$$

Note the similarity to the third column of the matrix of (7). Let

$$H = \begin{pmatrix} X_1'R^{-1} \\ PX_2'R^{-1} \\ Z'R^{-1} \end{pmatrix}. \qquad (26)$$

Note the similarity to the right hand side of (7). Then compute

$$\begin{pmatrix} C_1T \\ C_2T - I \\ C_3T \end{pmatrix}\beta_2\beta_2'\,(T'C_1' \;\; T'C_2' - I \;\; T'C_3')
+ \begin{pmatrix} C_1S \\ C_2S \\ C_3S - I \end{pmatrix} G\,(S'C_1' \;\; S'C_2' \;\; S'C_3' - I)$$
$$+\; \begin{pmatrix} C_1H \\ C_2H \\ C_3H \end{pmatrix} R\,(H'C_1' \;\; H'C_2' \;\; H'C_3')
= \begin{pmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{pmatrix} = B. \qquad (27)$$

Then the mean squared error of

$$(M_1 \;\; M_2 \;\; M_3)\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \\ \hat u - u \end{pmatrix}
\;\;\text{is}\;\;
(M_1 \;\; M_2 \;\; M_3)\,B\,\begin{pmatrix} M_1' \\ M_2' \\ M_3' \end{pmatrix}. \qquad (28)$$

Of course this cannot be evaluated numerically except for assumed values of β₂, G, and R.
The result simplifies remarkably if we evaluate at the same values used in (7), namely
β₂β₂' = P, G̃ = G, R̃ = R. Then B is simply

$$C\begin{pmatrix} I & 0 & 0 \\ 0 & P & 0 \\ 0 & 0 & I \end{pmatrix}
= \begin{pmatrix} C_{11} & C_{12}P & C_{13} \\ C_{21} & C_{22}P & C_{23} \\ C_{31} & C_{32}P & C_{33} \end{pmatrix}. \qquad (29)$$

C and the Cᵢⱼ are defined in (9.17).


When the method of (8) is used, modify the result for (7) as follows. Let a g-inverse
of the matrix of (8) be

C11 C12 C13


C1

C21 C22 C23 = C2 = C.


C31 C32 C33
C3

(30)

2 T I for C2 T I, PC
2 S for C2 S, and PC
2 H for C2 H and proceed as
Substitute PC

in (28) using the Ci from (29). If P = P, G = G, R = R, B simplifies to

I 0 0
I 0 0

0
P
0

C 0 P 0 .
0 0 I
0 0 I

(31)

If the method of (9) is used, delete P from T, S, and H in (24), (25), and (26), let C

be a g-inverse of the matrix of (9), and then proceed as for method (7). When P = P,

G = G, and R = R, the simple result, B = C can be used.

Estimability In Biased Estimation

The traditional understanding of estimability in the linear model is that K'β is
defined as estimable if some linear function of y exists that has expectation K'β, and
thus this linear function is an unbiased estimator. But if we relax the requirement of
unbiasedness, is the above an appropriate definition of estimability? Is any function of β
now estimable? It seems reasonable to me to restrict estimation to functions that could
be estimated if we had no missing subclasses. Otherwise we could estimate elements of β
that have no relevance to the experiment in question. For example, suppose treatments involve
levels of protein in the ration. Just because we invoke biased estimation of treatments
would hardly seem to warrant estimation of some treatment that has nothing to do with
level of protein. Consequently we state these rules for functions that can be estimated
biasedly.

1. We want to estimate K₁'β₁ + K₂'β₂, where a prior on β₂ is used.

2. If K₁'β₁ + K₂'β₂ were estimable with no missing subclasses, this function is a candidate
for estimation.

3. K₁'β₁ must be estimable under a model in which E(y) = X₁β₁.

4. K₁'β₁ + K₂'β₂ does not need to be estimable in the sample, but must be estimable in
the filled subclass case.

Then K₁'β₁° + K₂'β₂° is invariant to the solution to (7), (8), or (9). Let us illustrate with a
model

    yᵢⱼ = μ + tᵢ + eᵢⱼ, i = 1, 2, 3.

Suppose that the numbers of observations per treatment are (5, 3, 0). However, we are
willing to assume prior values for squares and products of t₁, t₂, t₃ even though we have
no data on t₃. The following functions would be estimable if n₃ > 0:

$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \mu \\ t_1 \\ t_2 \\ t_3 \end{pmatrix}.$$

Further, with β₁ being just μ, K₁' being 1, and X₁ a vector of 1's, K₁'β₁ is estimable under
a model E(yᵢⱼ) = μ.

Suppose in contrast that we want to impose a prior on just t₃. Then β₁' = (μ t₁ t₂)
and β₂ = t₃. Now

$$K_1'\beta_1 = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} \mu \\ t_1 \\ t_2 \end{pmatrix}.$$

But the third row represents a non-estimable function. That is, μ is not estimable under
the model with β₁' = (μ t₁ t₂). Consequently μ + t₃ should not be estimated in this way.
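Rule 3 above is a rank condition, so it can be checked mechanically. A small sketch of one way to do that (the helper name is mine; it uses the standard result that K₁'β₁ is estimable under E(y) = X₁β₁ exactly when each row of K₁' lies in the row space of X₁):

    import numpy as np

    def is_estimable(K, X1, tol=1e-8):
        """True if every row of K (functions of beta1) is estimable under E(y) = X1 beta1."""
        r = np.linalg.matrix_rank(X1, tol)
        for k in K:
            if np.linalg.matrix_rank(np.vstack([X1, k]), tol) > r:
                return False      # this row lies outside the row space of X1
        return True

    # the example above: mu, t1, t2 with data only on treatments 1 and 2
    X1 = np.array([[1, 1, 0]] * 5 + [[1, 0, 1]] * 3, dtype=float)
    K = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [1, 0, 0]], dtype=float)
    print(is_estimable(K[:2], X1))   # True: mu+t1 and mu+t2 are estimable
    print(is_estimable(K, X1))       # False: the third row (mu alone) is not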

As another example suppose we have a 2 × 3 fixed model with n₂₃ = 0 and all other
nᵢⱼ > 0. We want to estimate all six μᵢⱼ = μ + aᵢ + bⱼ + γᵢⱼ. With no missing subclasses
these are estimable, so they are candidates for estimation. Suppose we use priors on γ.
Then

$$(K_1' \;\; K_2')\begin{pmatrix}\beta_1 \\ \beta_2\end{pmatrix} =
\begin{pmatrix}
1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} \mu \\ a_1 \\ a_2 \\ b_1 \\ b_2 \\ b_3 \\ \gamma_{11} \\ \gamma_{12} \\ \gamma_{13} \\ \gamma_{21} \\ \gamma_{22} \\ \gamma_{23} \end{pmatrix}.$$

Now K₁'β₁ is estimable under a model E(yᵢⱼₖ) = μ + aᵢ + bⱼ. Consequently we can by
our rules estimate all six μᵢⱼ. These will have expectations as follows.

    E(μ̂ᵢⱼ) = μ + aᵢ + bⱼ + some function of γ ≠ μ + aᵢ + bⱼ + γᵢⱼ.

Now suppose we wish to estimate by using a prior only on γ₂₃. Then the last row of
K₁'β₁ is μ + a₂ + b₃, but this is not estimable under a model

$$E\begin{pmatrix} y_{11} \\ y_{12} \\ y_{13} \\ y_{21} \\ y_{22} \\ y_{23} \end{pmatrix} =
\begin{pmatrix}
\mu + a_1 + b_1 + \gamma_{11} \\
\mu + a_1 + b_2 + \gamma_{12} \\
\mu + a_1 + b_3 + \gamma_{13} \\
\mu + a_2 + b_1 + \gamma_{21} \\
\mu + a_2 + b_2 + \gamma_{22} \\
\mu + a_2 + b_3
\end{pmatrix}.$$

Consequently we should not use a prior on just γ₂₃.

Tests Of Hypotheses

Exact tests of hypotheses do not exist when biased estimation is used, but one might
wish to use the following approximate tests, which are based on using the mean squared
error of K'β° rather than Var(K'β°).

7.1  Var(e) = Iσ²ₑ

When Var(e) = Iσ²ₑ write (7) as (32) or (8) as (33). Using the notation of Chapter
6, G = G*σ²ₑ and P = P*σ²ₑ.

$$\begin{pmatrix}
X_1'X_1 & X_1'X_2 & X_1'Z \\
P_*X_2'X_1 & P_*X_2'X_2 + I & P_*X_2'Z \\
Z'X_1 & Z'X_2 & Z'Z + G_*^{-1}
\end{pmatrix}
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \\ \hat u \end{pmatrix} =
\begin{pmatrix} X_1'y \\ P_*X_2'y \\ Z'y \end{pmatrix}. \qquad (32)$$

The corresponding equations with symmetric coefficient matrix are in (33).

$$\begin{pmatrix}
X_1'X_1 & X_1'X_2P_* & X_1'Z \\
P_*X_2'X_1 & P_*X_2'X_2P_* + P_* & P_*X_2'Z \\
Z'X_1 & Z'X_2P_* & Z'Z + G_*^{-1}
\end{pmatrix}
\begin{pmatrix} \hat\beta_1 \\ \hat\alpha \\ \hat u \end{pmatrix} =
\begin{pmatrix} X_1'y \\ P_*X_2'y \\ Z'y \end{pmatrix}. \qquad (33)$$

Then β̂₂ = P*α̂, where α̂ is the second subvector of the solution to (33).
Let a g-inverse of the matrix of (32) post-multiplied by

$$Q \equiv \begin{pmatrix} I & 0 & 0 \\ 0 & P_* & 0 \\ 0 & 0 & I \end{pmatrix},$$

or a g-inverse of the matrix of (33) pre-multiplied and post-multiplied by Q, be denoted by

$$\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix},$$

where C₁₁ has order p × p and C₂₂ has order q × q. Then if P̃ = P, the mean squared error
of K'β̂ is K'C₁₁Kσ²ₑ. Then

    (K'β° - c)'[K'C₁₁K]⁻¹(K'β° - c)/(s σ̂²ₑ)

is distributed under the null hypothesis approximately as F with s, t degrees of freedom,
where s = the number of linearly independent rows in K', and σ̂²ₑ is estimated unbiasedly
with t degrees of freedom.

7.2  Var(e) = R

Let a g-inverse of (7) post-multiplied by

$$Q \equiv \begin{pmatrix} I & 0 & 0 \\ 0 & P & 0 \\ 0 & 0 & I \end{pmatrix},$$

or a g-inverse of (8) pre-multiplied and post-multiplied by Q, be denoted by

$$\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}.$$

Then if R̃ = R, G̃ = G, and P̃ = P, K'C₁₁K is the mean squared error of K'β°, and
(K'β° - c)'(K'C₁₁K)⁻¹(K'β° - c) is distributed approximately as χ² with s degrees of
freedom under the null hypothesis, K'β = c.
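The approximate χ² test of Section 7.2 is mechanical once K, c, β° and C₁₁ are in hand. A sketch under the assumption that those quantities have already been computed (all names are illustrative); the returned statistic is compared with a χ² critical value on s degrees of freedom:

    import numpy as np

    def approx_chi2_stat(K, beta_sol, c, C11):
        """Approximate test statistic for K'beta = c using K'C11K as the MSE of K'beta0."""
        d = K.T @ beta_sol - c            # K'beta0 - c
        mse = K.T @ C11 @ K               # treated as the mean squared error matrix
        stat = float(d @ np.linalg.solve(mse, d))
        s = K.shape[1]                    # number of rows of K'
        return stat, s                    # compare stat with chi-square(s) critical value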

Estimation of P

If one is to use biased estimation and prediction, one would usually have to estimate
P, ordinarily a singular matrix. If the elements of β₂ are thought to have no particular
pattern, permutation theory might be used to derive average values of squares and products
of elements of β₂, that is, the value of P. We might then formulate this as estimation
of a variance-covariance matrix, usually with fewer parameters than t(t + 1)/2, where t is
the order of P. I think I would estimate these parameters by the MIVQUE method for
singular G described in Section 9 of Chapter 11 or by REML of Chapter 12.

Illustration

We illustrate biased estimation by a 3-way mixed model. The model is

    yₕᵢⱼₖ = rₕ + cᵢ + γₕᵢ + uⱼ + eₕᵢⱼₖ,

where r, c, γ are fixed, Var(u) = I/10, and Var(e) = 2I.
The data are as follows:

    hi          Levels of j
    subclass    1   2   3    yhi..
    11          2   1   0     18
    12          0   1   1     13
    13          1   0   0      7
    21          1   2   1     26
    22          0   0   1      9
    y..j.      25  27  21

We want to estimate using prior values of the squares and products of the γₕᵢ. Suppose
this prior is as follows, ordering i within h, and including γ₂₃:

$$\begin{pmatrix}
 .1 & -.05 & -.05 & -.1 &  .05 &  .05 \\
    &  .1  & -.05 &  .05 & -.1 &  .05 \\
    &      &  .1  &  .05 &  .05 & -.1 \\
    &      &      &  .1  & -.05 & -.05 \\
    &      &      &      &  .1  & -.05 \\
    &      &      &      &      &  .1
\end{pmatrix}.$$

The equations of the form

$$\begin{pmatrix}
X_1'R^{-1}X_1 & X_1'R^{-1}X_2 & X_1'R^{-1}Z \\
X_2'R^{-1}X_1 & X_2'R^{-1}X_2 & X_2'R^{-1}Z \\
Z'R^{-1}X_1 & Z'R^{-1}X_2 & Z'R^{-1}Z
\end{pmatrix}
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \\ \hat u \end{pmatrix} =
\begin{pmatrix} X_1'R^{-1}y \\ X_2'R^{-1}y \\ Z'R^{-1}y \end{pmatrix}$$

are presented numerically in (34).


6 0 3 2

5 4 1

7 0

1
2

1
0
0
0
1

3
0
3
0
0
3

2
0
0
2
0
0
2

1
0
0
0
1
0
0
1

0
4
4
0
0
0
0
0
4

0
1
0
1
0
0
0
0
0
1

0
0
0
0
0
0
0
0
0
0
0

3
1
3
0
1
2
0
1
1
0
0
4

2
2
3
1
0
1
1
0
2
0
0
0
4

1
2
1
2
0
0
1
0
1
1
0
0
0
3

r
c

38
35
44
22
7
18
13
7
26
9
0
25
27
21

1
2

(34)

Note that 23 is included even though no observation on it exists.


Pre-multiplying these equations by

$$T \equiv \begin{pmatrix} I & 0 & 0 \\ 0 & P & 0 \\ 0 & 0 & I \end{pmatrix}$$

and adding I to the diagonals of equations (6)-(11) and 10I to the diagonals of equations
(12)-(14), we obtain the coefficient matrix to solve for the biased estimators and predictors.
The right hand side vector is

    (19, 17.5, 22, 11, 3.5, .675, .225, .45, .675, .225, .45, 12.5, 13.5, 10.5)'.

This gives a solution of

    r°  = (3.6899, 4.8607),
    c°  = (1.9328, 3.3010, 3.3168),
    γ̂   = (.11406, -.11406, 0, -.11406, .11406, 0),
    û   = (.00664, .04282, .03618).

Note that Σⱼ γ̂ᵢⱼ = 0 for i = 1, 2, and Σᵢ γ̂ᵢⱼ = 0 for j = 1, 2, 3.
These are the same relationships that were defined for γ.


Post-multiplying the g-inverse of the coefficient matrix by T we get (35) . . . (38) and
the matrix for computing mean squared errors for M0 (r , c , , u ). The lower 9 9
submatrix is symmetric and invariant reflecting the fact that , and u are invariant to
the g-inverse taken.
Upper left 7 7

.26181 .10042
.02331 0
.15599 .02368
.00368

.05313
.58747 .22911 0
.54493
.07756
.00244

.05783 .26296
.41930 0 .35232 .02259 .00741

.56640
.61368 .64753 0 1.02228
.00080 .03080

.29989
.13633
.02243 0
2.07553
.07567
.04433

.02288
.07836 .02339 0
.07488
.08341 .03341

.02712 .02836
.02339 0
.07512 .03341
.08341

(35)

Upper right 7 7

.02
.08
.03
.03
.12
.05
.05

.02368
.07756
.02259
.00080
.07567
.08341
.03341

.00368 .02 .03780 .01750 .00469

.00244
.08 .01180 .01276 .03544

.00741 .03 .01986 .02631


.00617

.03080 .03
.02588 .01608 .04980

.04433
.12 .05563
.01213
.00350

.03341
.05 .00199
.00317 .00118

.08341
.05
.00199 .00317
.00118

(36)

Lower left 7 7

.05
.05
0 0
.15
.05
.05

.02288 .07836
.02339 0 .07488 .08341
.03341

.02712
.02836 .02339 0 .07512
.03341 .08341

.05
.05
0 0
.15
.05
.05

.01192
.01408 .04574 0 .08l51 .00199
.00199

.03359 .02884 .01023 0


.02821
.00317 .00317

.05450 .08524
.05597 0
.05330 .00118
.00118

(37)

Lower right 7 7

.10

.05
.05 .10
0
0
0

.08341 .03341 .05


.00199 .00317
.00118

.08341 .05 .00199


.00317 .00118

.10
0
0
0

.09343
.00537
.00120

.09008
.00455

.09425
12

(38)

A g-inverse of the coefficient matrix of equations like (8) is in (39) . . . (41).
This gives a solution (-1.17081, 0, 6.79345, 8.16174, 8.17745, 0, 0, .76038, 1.52076, 0, 0,
.00664, .04282, .03618). Premultiplying this solution by T we obtain for β̂₁ the vector
(-1.17081, 0, 6.79345, 8.16174, 8.17745), and the same solution as before for β̂₂ and û; β₁ is not
estimable, so β̂₁ is not invariant and differs from the previous solution. But estimable
functions of β̂₁ are the same.
Pre- and post-multiplying (39) . . . (41) by T gives the matrix (42) . . . (43). The lower
9 × 9 submatrix is the same as that of (38), associated with the fact that β̂₂ and û are
invariant to whatever g-inverse is obtained.
Upper left 7 7

1.00283 0 .43546 .68788 1.07683 0 0

0
0
0
0 0 0

.51469
.32450
.51712 0 0

1.20115
.72380 0 0

3.34426
0
0

0 0

(39)

Upper right 7 7 and (lower left 7 7)

.65838
.68324 0 0
.026 .00474
.03075

0
0 0 0
0
0
0

.30020 .39960 0 0 .03166 .03907 .02927

.14427 .71147 0 0
.01408 .02884 .08524

1.64509 .70981 0 0 .06743 .00063 .03l94

0
0 0 0
0
0
0

0
0 0 0
0
0
0

(40)

Lower right 7 7

12.59603 5.19206 0 0 .01329


.02112 .00784

10.38413 0 0
.02657 .04224
.01567

0 0
0
0
0

0
0
0
0

.09343
.00537
.00120

.09008
.00455

.09425

13

(41)

Upper left 7 7

1.00283 0 .43546 .68788 1.07683 .10124


.00124

0
0
0
0
0
0

.51469
.32450
.51712
.05497 .00497

1.20115
.72380
.07836 .02836

3.34426
.15324
.04676

.08341 .03341

.08341

(42)

Upper right 7 7 and (lower left 7 7)

.1
0
.05
.05
.2
.05
.05

.10124 .00124 .1
.026 .00474
.03075

0
0
0
0
0
0

.05497
.00497 .05 .03166 .03907 .02927

.07836
.02836 .05
.01408 .02884 .08524

.15324 .04676
.2 .06743 .00063 .03194

.08341
.03341 .05 .00199
.00317 .00118

.03341 .08341 .05


.00199 .00317
.00118

(43)

Lower right 7 7 is the same as in (38).


0

Suppose we wish to estimate K0 ( 1 2 )0 , which is estimable when the r c subclasses


are all filled, and

K0 =

6
0
3
3
3

0
6
3
3
3

2
2
6
0
0

2
2
0
6
0

2
2
0
0
6

2
0
3
0
0

2
0
0
3
0

2
0
0
0
3

0
2
3
0
0

0
2
0
3
0

0
2
0
0
3

/6.

Pre-multiplying the upper 11x11 submatrix of either (35) to (38) or (42) to (43) by K0
gives identical results shown in (44).

.44615 .17671 .15136 .19665 .58628

.91010 .08541 .38312 1.16170

.32993 .01354 .01168

.76397 .09215

2.51814

(44)

This represents the estimated mean squared error matrix of these 5 functions of β.
Next we illustrate with another set of data the relationships of (3), (4), and (5) to
(7). We have a design with 3 treatments and 2 random sires. The subclass numbers are

                  Sires
    Treatment    1    2
        1        2    1
        2        1    2
        3        2    0

The model is

    yᵢⱼₖ = μ + tᵢ + sⱼ + αxᵢⱼₖ + eᵢⱼₖ,

where α is a regression and xᵢⱼₖ the associated covariate.

    y' = (5 3 6 4 7 5 4 8),
    covariates = (1 2 1 3 2 4 2 3).

The data are ordered sires within treatments. We shall use a prior on treatments of

$$\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix},$$

with Var(e) = 5I and Var(s) = I.
We first illustrate the equations of (8). The coefficient matrix

$$\begin{pmatrix}
X_1'R^{-1}X_1 & X_1'R^{-1}X_2 & X_1'R^{-1}Z \\
X_2'R^{-1}X_1 & X_2'R^{-1}X_2 & X_2'R^{-1}Z \\
Z'R^{-1}X_1 & Z'R^{-1}X_2 & Z'R^{-1}Z
\end{pmatrix} =$$

1.6 3.6 .6 .6 .4 1.0 .6

9.6 .8 1.8 1.0 2.2 1.4

.6
0
0 .4 .2

.6
0 .2 .4

.4 .4
0

1.0
0

.6

(45)

and
0

X1 R1 y
0

0
X2 R1 y = (8.4 19.0 2.8 3.2 2.4 4.8 3.6) .
Z0 R1 y

These are ordered, , , t, s. Premultiplying (45) and (46) by

1 0 0
0
0 0

1 0
0
0 0

2
1
1
0

2 1 0

2 0

15

0
0
0
0
0
0
1

(46)

we get

1.6
3.6
.6
.6
.4 1.0
.6

3.6
9.6
.8 1.8 1.0 2.2 1.4

.2 1.2 1.2 .6 .4
.2
0

.2
1.8 .6 1.2 .4 .4
.6
,

.4 .6 .6 .6
.8
.2 .6

1.0
2.2
.4
.2
.4 1.0
0

.6
1.4
.2
.4
0
0
.6

(47)

(8.4 19.0 0 1.2 1.2 4.8 3.6)0 .

(48)

and
The vector (48) is the right hand side of equations like (8). The coefficient matrix
is matrix (47) + dg(0 0 1 1 1 1 1). The solution is

    μ°    = 5.75832,
    α̂     = -.16357,
    (t*)' = (-.49697, -.02234, .51931),
    (s*)' = (-.30146, .30146).

Now we set up equations (3).

6 1 0 1

6 0 1

6 0

V = (ZGZ0 + R) =

0
0
1
0
6

0
0
1
0
1
6

1
1
0
1
0
0
6

1
1
0
1
0
0
1
6

(49)

2 2 2 1 1 1 1 1

2 2 1 1 1 1 1

2 1 1 1 1 1

0
0
2
2
2 1 1

X2 2 2 X2 =
.

2
2
1
1

2 1 1

2
2

2
0

(V + X2 2 2 X2 )1 =

16

(50)

.1525 .0475 .0275 .0090


.0111
.0111 .0005 .0005

.1525 .0275 .0090


.0111
.0111 .0005 .0005

.1444
.0214 .0067 .0067
.0119
.0119

.1417 .0280 .0280 .0031 .0031

.1542
.0458
.0093
.0093

.1542
.0093
.0093

.1482 .0518

.1482

(51)

The equations like (4) are


.857878 1.939435
1.939435 5.328690

4.622698
10.296260

(52)

The solution is (5.75832 - .163572) as in the mixed model equations.

(y

X1 1 )

1
1
1
1
1
1
1
1

1
2
1
3
2
4
2
3

5.75832
.163572

.59474
2.43117
.40526
1.26760
1.56883
.10403
1.43117
2.73240

2 2 X2 (V + X2 2 2 X2 )1 =

.1426
.1426
.1471 .0725 .0681 .0681 .0899 .0899

.1742
.1270
.1270 .0766 .0766
.0501 .0501 .0973
.
.0925 .0925 .0497 .1017 .0589 .0589
.1665
.1665
Then t = (-.49697 -.02234 .51931)0 as before.
0

GZ0 (V + X2 2 2 X2 )1 =
.0949
.0949 .0097
.1174 .0127 .0127 .0923 .0923
.0053 .0053
.1309 .0345 .1017 .1017 .0304 .0304

Then u = (-.30146 .30146)0 as before.


Sections 9 and 10 of Chapter 15 give details concerning use of a diagonal matrix in
place of P.

10  Relationships Among Methods

BLUP, Bayesian estimation, and minimum mean squared error estimation are quite
similar, and in fact are identical under certain assumptions.

10.1  Bayesian estimation

Let (X Z) = W and (β' u') = θ'. Then the linear model is

    y = Wθ + e.

Let e have a multivariate normal distribution with null means and Var(e) = R. Let
the prior distribution of θ be multivariate normal with E(θ) = α (the vector of prior
means), Var(θ) = C, and Cov(θ, e') = 0. Then for any of the common loss functions, that is,
squared loss function, absolute loss function, or uniform loss function, the Bayesian
estimator of θ is the solution to (53).

$$(W'R^{-1}W + C^{-1})\,\hat\theta = W'R^{-1}y + C^{-1}\alpha. \qquad (53)$$

Note that θ̂ is an unbiased estimator of θ if estimable and E(θ) = α. See Lindley and
Smith (1972) for a discussion of Bayesian estimation for linear models. Equation (53)
can be derived by maximizing f(y, θ) for variations in θ. This might be called a MAP
(maximum a posteriori) estimator, Melsa and Cohn (1978).
Now suppose that

$$C^{-1} = \begin{pmatrix} 0 & 0 \\ 0 & G^{-1} \end{pmatrix}$$

and the prior mean α = 0. Then (53) becomes the mixed model equations for BLUE and BLUP.
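A small numerical sketch of (53), showing that with a null block in C⁻¹ for β (a flat prior) and prior mean zero the solution coincides with the usual mixed model equations. The dimensions and simulated data are arbitrary illustrations, not from the text.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, q = 12, 2, 3
    X = np.column_stack([np.ones(n), rng.integers(0, 2, n)]).astype(float)
    Z = np.eye(q)[rng.integers(0, q, n)]          # random group incidence
    G = np.eye(q) * 0.5
    R = np.eye(n) * 2.0
    u_true = rng.multivariate_normal(np.zeros(q), G)
    y = X @ np.array([5.0, 1.0]) + Z @ u_true + rng.normal(0, np.sqrt(2.0), n)

    W = np.hstack([X, Z])
    Rinv = np.linalg.inv(R)
    # C^{-1} with a null block for beta (flat prior) and G^{-1} for u; prior mean alpha = 0
    Cinv = np.zeros((p + q, p + q))
    Cinv[p:, p:] = np.linalg.inv(G)
    theta_hat = np.linalg.solve(W.T @ Rinv @ W + Cinv, W.T @ Rinv @ y)   # equation (53)
    print(theta_hat[:p], theta_hat[p:])           # BLUE of beta, BLUP of u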

10.2  Minimum mean squared error estimation

Using the same notation as in Section 10.1, the minimum mean squared error estimator is

$$(W'R^{-1}W + Q^{-1})\,\theta^o = W'R^{-1}y, \qquad (54)$$

where Q = C + αα'. Note that if α = 0 this and the Bayesian estimator are identical.
The essential difference is that the Bayesian estimator uses the prior mean E(θ), whereas
minimum MSE uses only squares and products of θ.
To convert (54) to the situation with a prior on β₂ but not on β₁, let

$$Q^{-1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & P^{-1} & 0 \\ 0 & 0 & G^{-1} \end{pmatrix}.$$

The upper left partition is square with order equal to the number of elements in β₁.
To convert (54) to the BLUP mixed model equations let

$$Q^{-1} = \begin{pmatrix} 0 & 0 \\ 0 & G^{-1} \end{pmatrix},$$

where the upper left submatrix is square with order p, the number of elements in β. In
the above results P may be singular. In that case use the technique described in previous
sections for singular G and P.

10.3  Invariance property of Bayesian estimator

Under normality and with absolute deviation as the loss function, the Bayesian estimator
of f(β, u) is f(β°, û), where (β°, û) is the Bayesian solution (also the BLUP
solution when the priors are on u only), and f is any function. This was noted by Gianola
(1982), who made use of a result reported by DeGroot (1981). Thus under normality any
function of the BLUP solution is the Bayesian estimator of that function when the loss
function is absolute deviation.

10.4  Maximum likelihood estimation

If the prior distribution on the parameters to be estimated is the uniform distribution
and the mode of the posterior distribution is to be maximized, the resulting estimator is
ML. When Zu + e = ε has the multivariate normal distribution the MLE of β, assumed
estimable, is the maximizing value of k exp[-.5(y - Xβ)'V⁻¹(y - Xβ)]. The maximizing
value of this is the solution to

    X'V⁻¹Xβ° = X'V⁻¹y,

the GLS equations. Now we know that the conditional mean of u given y is

    GZ'V⁻¹(y - Xβ).

Under fairly general conditions the ML estimator of a function of parameters is that
same function of the ML estimators of those same parameters. Thus ML of the conditional
mean of u under normality is

    GZ'V⁻¹(y - Xβ°),

which we recognize as BLUP of u for any distribution.

11  Pattern Of Values Of P

When P has the structure described above and consequently is singular, a simpler
method can be used. A diagonal, non-singular P can be written which, when used in the
mixed model equations, results in the same estimates and predictions of estimable and
predictable functions. See Chapter 15.

Chapter 10
Quadratic Estimation of Variances
C. R. Henderson
1984 - Guelph

Estimation of G and R is a crucial part of estimation and tests of significance of
estimable functions of β and of prediction of u. Estimators and predictors with known
desirable properties exist when G and R are known, but realistically that is never the case.
Consequently we need to have good estimates of them if we are to obtain estimators and
predictors that approach BLUE and BLUP. This chapter is concerned with a particular
class of estimators, namely translation invariant, unbiased, quadratic estimators. First a
model will be described that appears to include all linear models proposed for animal
breeding problems.

A General Model For Variances And Covariances

The model with which we have been concerned is

    y = Xβ + Zu + e,
    Var(u) = G, Var(e) = R, Cov(u, e') = 0.

The dimensions of vectors and matrices are

    y: n × 1, X: n × p, β: p × 1, Z: n × q, u: q × 1, e: n × 1, G: q × q, and R: n × n.

Now we characterize u and e in more detail. Let

$$Zu = \sum_{i=1}^{b} Z_i u_i. \qquad (1)$$

Zᵢ has dimension n × qᵢ, uᵢ is qᵢ × 1, and Σᵢ₌₁ᵇ qᵢ = q.

$$Var(u_i) = G_{ii}\,g_{ii}. \qquad (2)$$
$$Cov(u_i, u_j') = G_{ij}\,g_{ij}. \qquad (3)$$

gᵢᵢ represents a variance and gᵢⱼ a covariance. Let

    e' = (e₁' e₂' . . . e_c').

$$Var(e_i) = R_{ii}\,r_{ii}. \qquad (4)$$
$$Cov(e_i, e_j') = R_{ij}\,r_{ij}. \qquad (5)$$

rᵢᵢ and rᵢⱼ represent variances and covariances respectively. With this model Var(y) is

$$V = \sum_{i=1}^{b}\sum_{j=1}^{b} Z_i G_{ij} Z_j'\,g_{ij} + R, \qquad (6)$$

$$Var(u) = G = \begin{pmatrix}
G_{11}g_{11} & G_{12}g_{12} & \cdots & G_{1b}g_{1b} \\
G_{12}'g_{12} & G_{22}g_{22} & \cdots & G_{2b}g_{2b} \\
\vdots & \vdots & & \vdots \\
G_{1b}'g_{1b} & G_{2b}'g_{2b} & \cdots & G_{bb}g_{bb}
\end{pmatrix}, \qquad (7)$$

and

$$Var(e) = R = \begin{pmatrix}
R_{11}r_{11} & R_{12}r_{12} & \cdots & R_{1c}r_{1c} \\
R_{12}'r_{12} & R_{22}r_{22} & \cdots & R_{2c}r_{2c} \\
\vdots & \vdots & & \vdots \\
R_{1c}'r_{1c} & R_{2c}'r_{2c} & \cdots & R_{cc}r_{cc}
\end{pmatrix}. \qquad (8)$$
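Given the incidence matrices Zᵢ, the blocks Gᵢⱼ and the scalars gᵢⱼ, equation (6) is a direct double sum. A minimal sketch (dictionary keys and the function name are mine; symmetric components are assumed to be supplied once, for i ≤ j):

    import numpy as np

    def build_V(Zs, Gblocks, gs, R):
        """V = sum_i sum_j Z_i G_ij Z_j' g_ij + R, as in (6).

        Zs      : list of incidence matrices Z_1 ... Z_b
        Gblocks : dict {(i, j): G_ij} for i <= j (0-based)
        gs      : dict {(i, j): g_ij} for i <= j
        R       : n x n error variance-covariance matrix
        """
        V = R.copy()
        b = len(Zs)
        for i in range(b):
            for j in range(i, b):
                if (i, j) not in Gblocks:
                    continue
                term = Zs[i] @ Gblocks[(i, j)] @ Zs[j].T * gs[(i, j)]
                V += term if i == j else term + term.T   # add the (j, i) contribution
        return V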


We illustrate this general model with two different specific models, first a traditional
mixed model for variance components estimation, and second a two trait model with
missing data. Suppose we have a random sire by fixed treatment model with interaction.
The numbers of observations per subclass are

Treatment 1
1
2
2
1

Sires
2 3
1 2
3 0

Let the scalar model be


yijk = + ti + sj + (ts)ij + eijk .
The sj have common variance, s2 , and are uncorrelated. The (ts)ij have common variance,
2
st
, and are uncorrelated. The sj and (ts)ij are uncorrelated. The eijk have common
variance, e2 , and are uncorrelated. The corresponding vector model, for b = 2, is
y = X + Z1 u1 + Z2 u2 + e.

X =

1
1
1
1
1
1
1
1
1

1
1
1
1
1
0
0
0
0

0
0
0
0
0
1
1
1
1

t1 ,

t2

Z1 u1 =

1
1
0
0
0
1
0
0
0

0
0
1
0
0
0
1
1
1

0
0
0
1
1
0
0
0
0

s1
s2
,
s3

Z2 u2 =

1
1
0
0
0
0
0
0
0

0
0
1
0
0
0
0
0
0

0
0
0
1
1
0
0
0
0

0
0
0
0
0
1
0
0
0

0
0
0
0
0
0
1
1
1

ts11
ts12
ts13
ts21
ts22

and
2
.
G11 g11 = I3 s2 , G22 g22 = I5 ts

G12 g12 does not exist, c = 1, and R11 r11 = I9 e2 .


For a two trait model suppose that we have the following data on progeny of two
related sires
Trait
Sire Progeny 1 2
1
1
X X
1
2
X X
1
3
X 0
2
4
X X
2
5
X 0
X represents a record and 0 represents a missing record.
genetic sire model. Order the records by columns, that is
u1 , u2 represent sire values for traits l and 2 respectively.
divided by 2. Let e1 , e2 represent errors for traits 1 and 2
of sire 1, both non-inbred.

Let us assume an additive


animals within traits. Let
These are breeding values
respectively. Sire 2 is a son

n = 8, q1 = 2, q2 = 2.

0
0
0
1
1
0
0
0

1 1/2
1/2 1

Z1 u1 =

G11 g11 =

1
1
1
0
0
0
0
0

u1 , Z2 u2 =

g11
,

G12 g12 =

0
0
0
0
0
1
1
0

0
0
0
0
0
0
0
1

u2 ,

1 1/2
1/2 1

g12
,

1 1/2
1/2 1

G22 g22 =
where

g11
g12

g12 g22

g22
,

is the additive genetic variance-covariance matrix divided by 4. Also,

R11 r11 = I5 r11


, R22 r22 = I3 r22
, R12 r12 =

where

r11
r12

r12
r22

1
0
0
0
0

0
1
0
0
0

0
0
0
1
0

r12
,

).
+ r11
/(g11
is the error variance-covariance matrix for the 2 traits. Then h21 = 4 g11
1/2

Genetic correlation between traits 1 and 2 is g12 /(g11 g22 ) .

Another method for writing G and R is the following:

$$G = \mathcal{G}_{11}g_{11} + \mathcal{G}_{12}g_{12} + \ldots + \mathcal{G}_{bb}g_{bb}, \qquad (9)$$

where

$$\mathcal{G}_{11} = \begin{pmatrix} G_{11} & 0 \\ 0 & 0 \end{pmatrix},\quad
\mathcal{G}_{12} = \begin{pmatrix} 0 & G_{12} & 0 \\ G_{12}' & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\;\ldots,\quad
\mathcal{G}_{bb} = \begin{pmatrix} 0 & 0 \\ 0 & G_{bb} \end{pmatrix}.$$

Every such expanded matrix 𝒢ᵢⱼ has order q. Similarly,

$$R = \mathcal{R}_{11}r_{11} + \mathcal{R}_{12}r_{12} + \ldots + \mathcal{R}_{cc}r_{cc}, \qquad (10)$$

where

$$\mathcal{R}_{11} = \begin{pmatrix} R_{11} & 0 \\ 0 & 0 \end{pmatrix},\quad
\mathcal{R}_{12} = \begin{pmatrix} 0 & R_{12} & 0 \\ R_{12}' & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \;\text{etc.},$$

and every ℛᵢⱼ has order n.

Quadratic Estimators

Many methods commonly used for estimation of variances and covariances are quadratic,
unbiased, and translation invariant. They include, among others, ANOVA estimators for
balanced designs, unweighted means and weighted squares of means estimators for filled
subclass designs, Henderson's methods 1, 2 and 3 for unequal numbers, MIVQUE, and
MINQUE. Searle (1968, 1971a) describes in detail some of these methods.
A quadratic estimator is defined as y'Qy where for convenience Q can be specified
as a symmetric matrix. If we derive a quadratic with a non-symmetric matrix, say P, we
can convert this to a quadratic with a symmetric matrix by the identity

    y'Qy = (y'Py + y'P'y)/2, where Q = (P + P')/2.

A translation invariant quadratic estimator satisfies

    y'Qy = (y + Xk)'Q(y + Xk) for any vector k,
         = y'Qy + 2y'QXk + k'X'QXk.

From this it is apparent that for equality it is required that

    QX = 0.                                   (11)

For unbiasedness we examine the expectation of y'Qy intended to estimate, say, g_gh.

$$E(y'Qy) = \beta'X'QX\beta + \sum_{i=1}^{b}\sum_{j=1}^{b} tr(QZ_iG_{ij}Z_j')\,g_{ij}
+ \sum_{i=1}^{c}\sum_{j=i}^{c} tr(QR_{ij})\,r_{ij}.$$

We require that the expectation equal g_gh. Now if the estimator is translation invariant,
the first term in the expectation is 0 because QX = 0. Further requirements are that

    tr(QZ_gG_{gh}Z_h') = 1, tr(QZ_iG_{ij}Z_j') = 0 for (i, j) ≠ (g, h), and
    tr(QR_{ij}) = 0 for all i, j.
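These two conditions, QX = 0 and the trace requirements, can be checked mechanically for any proposed Q. A short sketch, with all names illustrative:

    import numpy as np

    def check_quadratic(Q, X, ZG_terms, R_terms, target, tol=1e-8):
        """Verify translation invariance and unbiasedness conditions for y'Qy.

        ZG_terms : dict {(i, j): Z_i G_ij Z_j'} (n x n matrices)
        R_terms  : dict {(i, j): R_ij}          (n x n matrices)
        target   : the (g, h) pair whose g_gh the quadratic is meant to estimate
        """
        invariant = np.allclose(Q @ X, 0.0, atol=tol)          # condition (11)
        g_ok = all(
            abs(np.trace(Q @ M) - (1.0 if k == target else 0.0)) < tol
            for k, M in ZG_terms.items()
        )
        r_ok = all(abs(np.trace(Q @ M)) < tol for M in R_terms.values())
        return invariant, g_ok and r_ok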

Variances Of Estimators

Searle (1958) showed that the variance of a quadratic estimator y'Qy that is unbiased
and translation invariant is

    2 tr(QVQV),                               (12)

and the covariance between two estimators y'Q₁y and y'Q₂y is

    2 tr(Q₁VQ₂V),                             (13)

where y is multivariate normal and V is defined in (6). Then it is seen that (12) and
(13) are quadratics in the gᵢⱼ and rᵢⱼ, the unknown parameters that are estimated. Consequently
the results are in terms of these parameters, or they can be evaluated numerically
for assumed values of g and r. In the latter case it is well to evaluate V numerically for
assumed g and r and then to proceed with the methods of (12) and (13).
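For assumed values of g and r, the sampling variances (12) and (13) are one-line computations once V has been formed (for example with the build_V sketch above):

    import numpy as np

    def quadratic_var(Q, V):
        """Sampling variance of y'Qy under normality, equation (12)."""
        QV = Q @ V
        return 2.0 * np.trace(QV @ QV)

    def quadratic_cov(Q1, Q2, V):
        """Sampling covariance of y'Q1y and y'Q2y, equation (13)."""
        return 2.0 * np.trace(Q1 @ V @ Q2 @ V)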

Solutions Not In The Parameter Space

Unbiased estimators of variances and covariances, with only one exception, have positive
probabilities of solutions not in the parameter space. The one exception is estimation
of error variance from least squares or mixed model residuals. Otherwise estimates of
variances can be negative, and functions of estimates of covariances and variances can
result in estimated correlations outside the permitted range -1 to 1. In Chapter 12 the
condition required for an estimated variance-covariance matrix to be in the parameter
space is that there be no negative eigenvalues.
An inevitable price to pay for quadratic unbiasedness is non-zero probability that
the estimated variance-covariance matrix will not fall in the parameter space. All such
estimates are obtained by solving a set of linear equations obtained by equating a set
of quadratics to their expectations. We could, if we knew how, impose side conditions
on these equations that would force the solution into the parameter space. Having done
this the solution would no longer yield unbiased estimators. What should be done in
practice? It is sometimes suggested that we estimate unbiasedly, report all such results,
and then ultimately we can combine these into a better set of estimates that do fall in the
parameter space. On the other hand, if the purpose of estimation is to provide Ĝ and R̂
for immediate use in mixed model estimation and prediction, it would be very foolish to
use estimates not in the parameter space. For example, suppose that in a sire evaluation
situation we estimate σ²ₑ/σ²ₛ to be negative and use this in mixed model equations. This
would result in predicting a sire with a small number of progeny to be more different from
zero than the adjusted progeny mean if σ̂²ₑ/σ̂²ₛ is less than the corresponding diagonal
element for the sire. If the absolute value of this ratio is greater than the diagonal element,
the sign of ŝᵢ is reversed as compared to the adjusted progeny mean. These consequences
are of course contrary to selection index and BLUP principles.
Another problem in estimation should be recognized. The fact that estimated variance-covariance
matrices fall in the parameter space does not necessarily imply that functions
of these have that same property. For example, in an additive genetic sire model it is
often assumed that 4σ̂²ₛ/(σ̂²ₛ + σ̂²ₑ) is an estimate of h². But it is entirely possible that this
computed function is greater than one even when σ̂²ₛ and σ̂²ₑ are both greater than 0. Of
course if σ̂²ₛ < 0 and σ̂²ₑ > 0, the estimate of h² would be negative. Side conditions on the
solution for σ̂²ₛ and σ̂²ₑ that will insure that σ̂²ₛ, σ̂²ₑ, and ĥ² (computed as above) fall in the
parameter space are

    σ̂²ₛ > 0, σ̂²ₑ > 0, and σ̂²ₛ/σ̂²ₑ < 1/3.

Another point that should be made is that even though σ̂²ₛ and σ̂²ₑ are unbiased, σ̂²ₛ/σ̂²ₑ is
a biased estimator of σ²ₛ/σ²ₑ, and 4σ̂²ₛ/(σ̂²ₛ + σ̂²ₑ) is a biased estimator of h².

Form Of Quadratics

Except for MIVQUE and MINQUE most quadratic estimators in models with all
gᵢⱼ = 0 for i ≠ j and with R = Iσ²ₑ can be expressed as linear functions of y'y and of
reductions in sums of squares that will now be defined.
Let OLS equations in β, u be written as

    W'Wθ° = W'y,                              (14)

where W = (X Z) and

$$\theta^o = \begin{pmatrix} \beta^o \\ u^o \end{pmatrix}.$$

Then the reduction under the full model is

    (θ°)'W'y.                                 (15)

Partition, with possible re-ordering of columns,

    W = (W₁ W₂)                               (16)

and correspondingly

$$\theta = \begin{pmatrix} \theta_1 \\ \theta_2 \end{pmatrix}.$$

θ₁ should always contain β and from 0 to b - 1 of the uᵢ. Solve for θ₁ in

    W₁'W₁θ₁ = W₁'y.                           (17)

Then the reduction under the reduced model is

    (θ₁)'W₁'y.                                (18)

Expectations of Quadratics
Let us derive the expectations of these ANOVA type quadratics.
E(y0 y) = tr V ar(y) + 0 X0 X
=

b
X

tr(Zi Gii Zi )gii + ne2 + 0 X0 X.

i=1

(19)
(20)

In traditional variance components models every Gii = I. Then


E(y0 y) =

b
X

n gii + n e2 + 0 X0 X.

(21)

i=1

It can be seen that (15) and (18) are both quadratics in W0 y. Consequently we use
V ar(W0 y) in deriving expectations. The random part of W0 y is
X

W0 Zi ui + W0 e.

(22)

The matrix of the quadratic in W0 y for the reduction under the full model is (W0 W) .
Therefore the expectation is
b
X

tr(W0 W) W0 Zi Gii Zl Wgii + rank (W)e2 + 0 X0 W(W0 W) W0 X.

(23)

i=1

When all Gii = I, (23) reduces to


b
X

n gii + r(W)e2 + 0 X0 X.

(24)

i=1

For the reduction due to 1 , the matrix of the quadratic in W0 y is


0

(W1 W1 ) 0
0
0

Then the expectation of the reduction is


h
X

tr(W1 W1 ) W1 Zi Gii Zi W1 gii + rank (W1 )e2 + 0 X0 W1 (W10 W1 ) W10 X. (25)

i=1

When all Gii = I, (25) and when X is included in W1 simplifies to


X
i

n gii +

tr(W1 W1 ) W1 Zj Zj W1 gjj + rank (W1 )e2 + 0 X0 X.

(26)

where i refers to ui included in 1 , and j refers to uj not included in 1 . If Zj is a linear


function of W1 , the coefficient of gjj is n also.

and e
Quadratics in u

MIVQUE computations can be formulated as we shall see in Chapter 11 as quadratics


and e
, BLUP of u and e when g = g
and r = r. The mixed model equations are
in u
1 X X0 R
1 Z
X0 R
1 X Z0 R
1 Z + G
1
Z0 R

1 y
X0 R
1 y
Z0 R

(27)

be u
0 Q
Let some quadratic in u
u. The expectation of this is
trQ V ar(
u).

(28)

To find V ar(
u), define a g-inverse of the coefficient matrix of (27) as
C00 C01
C10 C11 .

C0
C1

C.

(29)

1 y. See (16) for definition of W. Then


= C1 W 0 R
u
1 y)] C0 ,
V ar(
u) = C1 [V ar(W0 R
1

(30)

and
1 y) =
V ar(W0 R

Xb

Xb

i=1
Xc

j=1
Xc

i=1

1 Wgij
1 Zi Gij Z R
W0 R
j

(31)

1 R R
1 Wrij .
W0 R
ij

(32)

j=1

be e
0 Q
Let some quadratic in e
e. The expectation of this is
trQ V ar(
e).

(33)

= y X o Z
0 ] and W = (X Z), giving
But e
u = y Wo , where (o )0 = [( o )0 u
1 ]y.
= [I WCW0 R
e

(34)

1 ) [V ar(y)] (I WCW0 R
1 )0 ,
V ar(
e) = (I WCW0 R

(35)

Therefore,
and
V ar(y) =

Xb

Xb

i=1
Xc

i=1

j=1
Xc

Zi Gij Zj gij

(36)

Rij rij .

(37)

j=1

When
G
R
V ar(
u)
V ar(
e)

=
=
=
=

G,

R,
G C11 , and
R WCW0 .

(38) and (39) are used for REML and ML methods to be described in Chapter 12.

(38)
(39)

Henderson's Method 1

We shall now present several methods that have been used extensively for estimation
of variances (and in some cases, with modifications, for covariances). These are modelled
after balanced ANOVA methods of estimation. The model for these methods is usually

$$y = X\beta + \sum_{i=1}^{b} Z_i u_i + e, \qquad (40)$$

where Var(uᵢ) = Iσ²ᵢ, Cov(uᵢ, uⱼ') = 0 for all i ≠ j, and Var(e) = Iσ²ₑ. However, it is
relatively easy to modify these methods to deal with

    Var(uᵢ) = Gᵢᵢσ²ᵢ.

For example, Gᵢᵢ might be A, the numerator relationship matrix.
Method 1, Henderson (1953), requires for unbiased estimation that X' = [1...1]. The
model is usually called a random model. The following reductions in sums of squares are
computed:

    y'Zᵢ(Zᵢ'Zᵢ)⁻¹Zᵢ'y  (i = 1, . . . , b),     (41)
    (1'yy'1)/n,                                (42)

and

    y'y.                                       (43)

The first b of these are simply uncorrected sums of squares for the various factors
and interactions. The next one is the correction factor, and the last is the uncorrected
sum of squares of the individual observations.
Then these b + 2 quadratics are equated to their expectations. The quadratics of (41)
are easy to compute and their expectations are simple because Zᵢ'Zᵢ is always diagonal.
Advantage should therefore be taken of this fact. Also one should utilize the fact that the
coefficient of σ²ᵢ is n, as is the coefficient of any σ²ⱼ for which Zⱼ is linearly dependent upon
Zᵢ, that is, Zⱼ = ZᵢK. For example, the reduction due to sires × herds has coefficient
n for σ²ₛₕ, σ²ₛ, and σ²ₕ in a model with random sires and herds. The coefficient of σ²ₑ in the
expectation is the rank of Zᵢ'Zᵢ, which is the number of elements in uᵢ.
Because Method 1 is so easy, it is often tempting to use it on a model in which
X ≠ (1...1), but to pretend that one or more fixed factors is random. This leads to
biased estimators, but the bias can be evaluated in terms of the unknown β. In balanced
designs no bias results from using this method.
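Method 1 therefore amounts to computing a handful of uncorrected sums of squares and solving a small linear system of expectations. The sketch below does this for a truly random model, assuming each Zᵢ is a 0/1 classification incidence matrix whose rows sum to 1; it illustrates the mechanics only and does not reproduce the worked example that follows.

    import numpy as np

    def method1_quadratics(y, Zs):
        """Reductions (41), the correction factor (42) and y'y (43)."""
        n = len(y)
        reds = [float(y @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)) for Z in Zs]
        cf = float(np.sum(y)) ** 2 / n
        return reds, cf, float(y @ y)

    def method1_estimates(y, Zs):
        """Equate the b+2 quadratics to their expectations and solve (random model)."""
        n = len(y)
        reds, cf, yy = method1_quadratics(y, Zs)
        b = len(Zs)
        quads = np.array(reds + [cf, yy])
        # unknowns: sigma^2_1..sigma^2_b, sigma^2_e, mu^2
        E = np.zeros((b + 2, b + 2))
        for r in range(b):                                   # row for reduction r
            M = Zs[r] @ np.linalg.solve(Zs[r].T @ Zs[r], Zs[r].T)
            for i, Zi in enumerate(Zs):
                E[r, i] = np.trace(M @ Zi @ Zi.T)            # coefficient of sigma^2_i
            E[r, b] = np.linalg.matrix_rank(Zs[r])           # coefficient of sigma^2_e
            E[r, b + 1] = n                                  # coefficient of mu^2
        for i, Zi in enumerate(Zs):                          # correction factor row
            E[b, i] = np.sum(Zi.sum(axis=0) ** 2) / n
        E[b, b], E[b, b + 1] = 1.0, n
        E[b + 1, :b] = n                                     # y'y row
        E[b + 1, b], E[b + 1, b + 1] = n, n
        return np.linalg.solve(E, quads)                     # last entry estimates mu^2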
We illustrate Method 1 with a treatment × sire design in which treatments are
regarded as random. The data are arranged as follows.

Number of Observations
Sires
Treatment 1 2 3 4 Sums
1
8 3 2 5
18
2
7 4 1 0
12
3
6 2 0 1
9
Sums
21 9 3 6
39

Sums of Observations
Sires
Treatment 1
2 3 4 Sums
1
54 21 13 25 113
2
55 33 8 0
96
3
44 17 0 9
70
Sums
153 71 21 34 279

y0 y = 2049.
The ordinary least squares equations for these data are useful for envisioning Method
1 as well as some others. The coefficient matrix is in (44). The right hand side vector is
(279, 113, 96, 70, 153, 71, 21, 34, 54, 21, 13, 25, 55, 33, 8, 44, 17, 9)0 .

39 18 12 9 21 9 3 6

18 0 0 8 3 2 5

12 0 7 4 1 0

9 6 2 0 1

21 0 0 0

9 0 0

3 0

Red (ts) =
Red (t) =
Red (s) =
C.F. =
E[Red (ts)] =

8
8
0
0
8
0
0
0
8

3
3
0
0
0
3
0
0
0
3

2
2
0
0
0
0
2
0
0
0
2

5
5
0
0
0
0
0
5
0
0
0
5

7
0
7
0
7
0
0
0
0
0
0
0
7

4
0
4
0
0
4
0
0
0
0
0
0
0
4

1
0
1
0
0
0
1
0
0
0
0
0
0
0
1

6
0
0
6
6
0
0
0
0
0
0
0
0
0
0
6

2
0
0
2
0
2
0
0
0
0
0
0
0
0
0
0
2

92
542 212
+
... +
= 2037.56.
8
3
1
1132 962 702
+
+
= 2021.83.
18
12
9
1532
342
+ ... +
= 2014.49.
21
6
2792 /39 = 1995.92.
2
10s2 + 39(s2 + t2 + ts
) + 39 2 .

11

1
0
0
1
0
0
0
1
0
0
0
0
0
0
0
0
0
1

(44)

For the expectations of other reductions as well as for the expectations of quadrat0
ics used in other methods including MIVQUE we need certain elements of W0 Z1 Z1 W,
0
0
W0 Z2 Z2 W, and W0 Z3 Z3 W, where Z1 , Z2 , Z3 refer to incidence matrices for t, s, and ts,
0
respectively, and W = [1 Z]. The coefficients of W0 Z1 Z1 W are in (45), (46), and (47).
Upper left 9 9
549 324 144 81 282 120 48 99 144

324
0 0 144 54 36 90 144

144 0 84 48 12 0
0

81 54 18 0 9
0

149 64 23 46 64

29
10
17
24

5 10 16

26 40
64

(45)

Upper right 9 9 and (lower left 9 9)

54
54
0
0
24
9
6
15
24

36
36
0
0
16
6
4
10
16

90
90
0
0
40
15
10
25
40

84
0
84
0
49
28
7
0
0

48 12 54 18 9
0 0 0 0 0

48 12 0 0 0

0 0 54 18 9

28 7 36 12 6

16 4 12 4 2

4 1 0 0 0

0 0 6 2 1
0 0 0 0 0

(46)

Lower right 9 9
9 6 15

4 10

25

0 0
0 0
0 0
49 28
16

0
0
0
7
4
1

0 0
0 0
0 0
0 0
0 0
0 0
36 12
4

0
0
0
0
0
0
6
2
1

The coefficients of W0 Z3 Z3 W are in (48), (49), and (50).

12

(47)

Upper left 9 9
209 102 66 41 149 29 5 26 64

102 0 0 64 9 4 25 64

66
0
49
16
1
0
0

41 36 4 0 1 0

149 0 0 0 64

29 0 0 0

5 0 0

26 0
64

(48)

Upper right 9 9 and (lower left 9 9)

9
9
0
0
0
9
0
0
0

4 25 49 16 1 36 4 1
4 25 0 0 0 0 0 0

0 0 49 16 1 0 0 0

0 0 0 0 0 36 4 1

0 0 49 0 0 36 0 0

0 0 0 16 0 0 4 0

4 0 0 0 1 0 0 0

0 25 0 0 0 0 0 1
0 0 0 0 0 0 0 0

(49)

dg (9, 4, 25, 49, 16, 1, 36, 4, 1)

(50)

Lower right 9 9

The coefficients of W0 Z2 Z2 W are in (51), (52), and (53).


Upper left 9 9
567 231 186 150 441 81

102 70 59 168 27

66 50 147 36

41 126 18

441 0

81

13

9 36 168
6 30 64

3 0 56

0 6 48

0 0 168

0 0
0

9 0
0

36
0
64

(51)

Upper right 9 9 and (lower left 9 9)

27
9
12
6
0
27
0
0
0

6 30 147 36 3 126 18 6
4 25 56 12 2 48 6 5

2 0 49 16 1 42 8 0

0 5 42 8 0 36 4 1

0 0 147 0 0 126 0 0

0 0
0 36 0
0 18 0

6 0
0 0 3
0 0 0

0 30
0 0 0
0 0 6
0 0 56 0 0 48 0 0

(52)

Lower right 9 9
9 0

0
0
25

0 12 0 0
0 0 2 0
0 0 0 0
49 0 0 42
16 0 0
1 0
36

6
0
0
0
8
0
0
4

0
0
5
0
0
0
0
0
1

(53)

2
E[Red (t)] = 3e2 + 39t2 + k1 (s2 + ts
) + 39 2 .
102 66 41
k1 =
+
+
= 15.7222.
18
12
9

The numerators above are the 2nd, 3rd, and 4th diagonals of (48) and (51). The denominators are the corresponding diagonals of the least squares coefficient matrix of (44). Also
note that
102 =

n21j = 82 + 32 + 22 + 52 ,

j
2

66 = 7 + 42 + 12 ,
41 = 62 + 22 + 12 .
2
) + 39 2 .
E[Red (s)] = 4e2 + 39s2 + k2 (t2 + ts
149 29 5 26
k2 =
+
+ +
= 16.3175.
21
9
3
6
2
E( C.F.) = e2 + k3 ts
+ k4 t2 + k5 s2 + 39 2 .
209
549
567
k3 =
= 5.3590, k4 =
= 14.0769, k5 =
= 14.5385.
39
39
39
14

It turns out that

e2 = [y0 y Red (ts)]/(39 10)


= (2049 2037.56)/29 = .3945.

e2
R(ts)
R(s)
R(t)
CF

1
10
4
3
1

0
39
16.3175
15.7222
5.3590

0
39
39
15.7222
14.5385

e2
2

ts

s2

t2
39
2

0
39
16.3175
39
14.0769

0
1
1
1
1

e2
2
ts
s2
t2
392

1.
0
0
0
0
.3945

2037.56
.31433
.07302 .06979 .06675
.06352

.01361 .03006
.06979
.02379 .06352
2014.49

.04981 .02894
.02571
.06675 .06352 2021.83
.21453
.45306 1.00251 .92775 2.47720
1995.92

= [.3945, -.1088, .6660, 1.0216, 1972.05].


The 5 5 matrix just above is the inverse of the expectation matrix.
What if t is fixed but we estimate by Method 1 nevertheless? We can evaluate the
2
bias in
ts
and
s2 by noting that
2

ts
= y0 WQ1 W0 y .31433
e2

where Q1 is a matrix formed from these elements of the inverse just above, (.07302, .06979, -.06675, 06352) and the matrices of quadratics in right hand sides representing
Red (ts), Red (s), Red (t), C.F.
Q1 is dg [.0016, -.0037, -.0056, -.0074, -.0033, -0.0078, -.0233, -.0116, .0091, .0243,
.0365, .0146, .0104, .0183, .0730, .0122, .0365, .0730]. dg refers to the diagonal elements
2
of a matrix. Then the contribution of tt0 to the expectation of
ts
is
0

tr(Z1 WQ1 W0 Z1 ) (tt0 )


where Z1 is the incidence matrix for t and W = (1 Z).
15

This turns out to be

.0257

tr

.0261 .0004
0
.0004 .0257
tt ,
.0261

that is, .0257 t21 + 2(.0261) t1 t2 2(.0004) t1 t3 .0004 t22 2(.0257) t2 t3 + .0261 t22 .
This is the bias due to regarding t as random. Similarly the quadratic in right hand sides
for estimation of s2 is
dg [-.0016, .0013, .0020, .0026, .0033, .0078, .0233, .0116, -.0038, -.0100, -.0150, -.0060,
-.0043, -.0075, -.0301, -.0050, -.0150, -.0301].
The bias in
s2 is

.0257 .0261

.0004
tr

.0004
0
.0257
tt .
.0261

2
This is the negative of the bias in
ts
.

Henderson's Method 3

Method 3 of Henderson (1953) can be applied to any general mixed model for variance
components. Usually the model assumed is

$$y = X\beta + \sum_i Z_i u_i + e, \qquad (54)$$

with Var(uᵢ) = Iσ²ᵢ, Cov(uᵢ, uⱼ') = 0, and Var(e) = Iσ²ₑ. In this method b + 1 different quadratics
of the following form are computed:

    Red(β with from 0 to b of the uᵢ included).    (55)

Then σ²ₑ is usually estimated by

    σ̂²ₑ = [y'y - Red(β, u₁, ..., u_b)]/[n - rank(W)],    (56)

where W = (X Z), and the solution for β°, u° is OLS.
In some cases it is easier to compute σ̂²ₑ by expanding the model to include all possible
interactions. Then, if there is no covariate, σ̂²ₑ is the within smallest subclass mean
square. Then σ̂²ₑ and the b + 1 reductions are equated to their expectations. Method 3
has the unfortunate property that there are often more than b + 1 reductions like (55)
possible. Consequently more than one Method 3 estimator exists, and in unbalanced
designs the estimates will not be invariant to the choice. One would like to select the
set that will give the smallest sampling variance, but this is unknown. Consequently it is
tempting to select the easiest subset. This usually is
Red(β, u₁), Red(β, u₂), ..., Red(β, u_b), Red(β). For example, Red(β, u₂) is computed as follows. Solve

$$\begin{pmatrix} X'X & X'Z_2 \\ Z_2'X & Z_2'Z_2 \end{pmatrix}
\begin{pmatrix} \beta^o \\ u_2^o \end{pmatrix} =
\begin{pmatrix} X'y \\ Z_2'y \end{pmatrix}.$$

Then the reduction = (β°)'X'y + (u₂°)'Z₂'y. To find the expectation of a reduction let a
g-inverse of the coefficient matrix of the ith reduction, (Wᵢ'Wᵢ), be Cᵢ. Then

$$E(\text{ith reduction}) = \text{rank}(C_i)\,\sigma_e^2
+ \sum_{j=1}^{s} tr\!\left(C_i W_i'Z_jZ_j'W_i\right)\sigma_j^2 + \beta'X'X\beta, \qquad (57)$$

where Wᵢ = [X  Zᵢ for any included uᵢ], and Cᵢ is the g-inverse. For example, in Red(β, u₁, u₃),
Wᵢ = [X Z₁ Z₃].
Certain of the coefficients in (57) are n. These are the coefficients of all σ²ⱼ included in the
reduction and also of any σ²ₖ for which Zₖ = WᵢL for some L.
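A compact sketch of one Method 3 reduction and its expectation coefficients, assuming the usual Var(uᵢ) = Iσ²ᵢ model (function names are illustrative):

    import numpy as np

    def reduction(y, Wi):
        """Reduction in sum of squares after fitting the columns of Wi by OLS."""
        sol, *_ = np.linalg.lstsq(Wi, y, rcond=None)
        return float(sol @ Wi.T @ y)

    def reduction_expectation_coeffs(Wi, Zs):
        """Coefficients of sigma^2_e and each sigma^2_j in E(reduction), as in (57)."""
        C = np.linalg.pinv(Wi.T @ Wi)                # a g-inverse of Wi'Wi
        coef_e = np.linalg.matrix_rank(Wi)
        coef_sigma = [float(np.trace(C @ Wi.T @ Z @ Z.T @ Wi)) for Z in Zs]
        return coef_e, coef_sigma                    # plus beta'X'X beta for the fixed part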
A serious computational problem with Method 3 is that it may be impossible with
0
existing computers to find a g-inverse of some of the Wi Wi . Partitioned matrix methods
can sometimes be used to advantage. Partition
0

W1 W 1 W1 W 2
0
0
W2 W 1 W2 W 2

Wi W i =
and

W1 y
0
W2 y

Wi y =

It is advantageous to have W1 W1 be diagonal or at least of some


form that is easy to
!
1
invert. Define and included ui as and partition as
. Then the equations to
2
solve are
!
!
!
0
0
0
1
W1 y
W1 W 1 W1 W 2
.
=
0
0
0
2
W2 y
W2 W 1 W2 W 2
Absorb 1 by writing equations
0

W2 PW2 2 = W2 Py
0

(58)

where P = I W1 (W1 W1 ) W1 . Solve for 2 in (58). Then


0

reduction = y0 W1 (W1 W1 ) W1 y + 2 W2 Py.


17

(59)

To find the coefficient of j2 in the expectation of this reduction, define


0

(W2 PW2 ) = C.
The coefficient of j2 is
0

tr(W1 W1 ) W1 Zj Zj W1 + trCW2 PZj Zj PW2 .

(60)

Of course if uj is included in the reduction, the coefficient is n.


Let us illustrate Method 3 by the same example used in Method 1 except now we
2
, s2 , and we need 3 reductions, each
regard t as fixed. Consequently the i2 are ts
including , t. The only possible reductions are Red (,t,ts), Red (,t,s), and Red (,t).
Consequently in this special case Method 3 is unique. To find the first of these reductions
we can simply take the last 10 rows and columns of the least squares equations. That is,
dg [8, 3, 2, 5, 7, 4, 1, 6, 2, 1] st = [54, 21, 13, 25, 55, 33, 8, 44, 17, 9]0 . The resulting
reduction is 2037.56 with expectation,
2
10e2 + 39(ts
+ s2 ) + 0 X0 X.

For the reduction due to (, t, s) we can take the subset of OLS equations represented
by rows (and columns) 2-7 inclusive. This gives equations to solve as follows.

18

0 0
12 0
9

8
7
6
21

3
4
2
0
9

2
1
0
0
0
3

t1
t2
t3
s1
s2
s3

113
96
70
153
71
21

(61)

We can delete and s4 because the above is a full rank subset of the coefficient matrix
that includes and s4 . The inverse of the above matrix is
.1717 .1602 .1417 .1593 .1599 .1678

.3074 .1989 .2203 .2342 .2093

.2913 .2035 .2004 .1608

.2399
.1963
.1796

.3131
.1847
.5150

(62)

and this gives a solution vector [5.448, 6.802, 6.760, 1.011, 1.547, 1.100]. The reduction
is 2029.57. The coefficient of s2 in the expectation is 39 since s is included. To find the
2
coefficient of ts
define as T the submatrix of (51) formed by taking columns and rows
2
(2-7). Then the coefficient of ts
= trace [matrix (62)] T = 26.7638. The coefficient of e2
is 6. The reduction due to t and its expectation has already been done for Method 1.
18

th

Another way of formulating a reduction and corresponding expectations is to compute


reduction as follows. Solve

0 0
0
r1
1
0

(63)
0 Wi Wi 0 o2 = r2 = r.
o3
r3
0 0
0
r = W0 y, where W = (X Z)
Red = r0 Qi r,

where Qi is some g-inverse of the coefficient matrix, (63). Then the coefficient of e2 in
the expectation is
0
rank (Qi ) = rank (Wi Wi ).
(64)
Coefficient of j2 is

tr Qi W0 Zj Zj W.

(65)

Let the entire vector of expectations be

e2
Red (1)
..
.

= P

Red (b + 1)
Then the unbiased estimators are

..
= P1
.

b
d
0 X
0X

e2
12
..
.
b2
0 X0 X

e2
Red (1)
..
.

(66)

Red (b + 1)

provided P1 exists. If it does not, Method 3 estimators, at least with the chosen b + 1
reductions, do not exist. In our example

e2
1
10
Red (ts)

=
Red (ts) 6
Red (t)
3

0
39
26.7638
15.7222

e2
2

ts

s2
d
0 X
0X

0
39
39
15.7222

0
1
1
1

.3945
.5240
.0331
2011.89

1.
0
0
0
.32690
.08172 .08172
0
.02618 .03877
.08172 .04296
1.72791 .67542
0 1.67542
19

e2
2
ts
s2
0 X0 X

.3945
2037.56
2029.57
2021.83

These are different from the Method 1 estimates.

10  A Simple Method for General X

We now present a very simple method for the general X model, provided an easy
g-inverse of X'X can be obtained. Write the following equations:

$$\begin{pmatrix}
Z_1'PZ_1 & Z_1'PZ_2 & \cdots & Z_1'PZ_b \\
Z_2'PZ_1 & Z_2'PZ_2 & \cdots & Z_2'PZ_b \\
\vdots & & & \vdots \\
Z_b'PZ_1 & Z_b'PZ_2 & \cdots & Z_b'PZ_b
\end{pmatrix}
\begin{pmatrix} \hat u_1 \\ \hat u_2 \\ \vdots \\ \hat u_b \end{pmatrix} =
\begin{pmatrix} Z_1'Py \\ Z_2'Py \\ \vdots \\ Z_b'Py \end{pmatrix}, \qquad (67)$$

where P = I - X(X'X)⁻X'. β° is absorbed from the least squares equations to obtain (67).
We could then compute b reductions from (67), and this would be Method 3. An easier
method, however, is described next.
Let Dᵢ be a diagonal matrix formed from the diagonals of Zᵢ'PZᵢ. Then compute the
following b quadratics,

    y'PZᵢDᵢ⁻¹Zᵢ'Py.                           (68)

This computation is simple because Dᵢ⁻¹ is diagonal: (68) is simply the sum of squares of the
elements of Zᵢ'Py, each divided by the corresponding element of Dᵢ. The expectation is also
easy. It is

$$q_i\sigma_e^2 + \sum_{j=1}^{b} tr\!\left(D_i^{-1}\,Z_i'PZ_jZ_j'PZ_i\right)\sigma_j^2. \qquad (69)$$

Because Dᵢ⁻¹ is diagonal we need to compute only the diagonals of Zᵢ'PZⱼZⱼ'PZᵢ to find
the last term of (69). Then, as in Methods 1 and 3, we find some estimate of σ²ₑ and equate
σ̂²ₑ and the b quadratics of (68) to their expectations.
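The whole procedure of (67)-(69) needs only Zᵢ'Py and a few diagonals, which is what makes it cheap. A sketch under the Var(uᵢ) = Iσ²ᵢ model (names are illustrative):

    import numpy as np

    def simple_quadratics(y, X, Zs):
        """Quadratics (68) and their expectation coefficients (69) for the general X model."""
        n = len(y)
        P = np.eye(n) - X @ np.linalg.pinv(X.T @ X) @ X.T    # absorbs beta
        quads, coeffs = [], []
        for Zi in Zs:
            ri = Zi.T @ P @ y                                # Z_i'Py
            di = np.diag(Zi.T @ P @ Zi)                      # diagonals of Z_i'PZ_i
            quads.append(float(np.sum(ri ** 2 / di)))        # equation (68)
            row = {"sigma2_e": Zi.shape[1]}                  # q_i, coefficient of sigma^2_e
            for j, Zj in enumerate(Zs):
                M = Zi.T @ P @ Zj                            # Z_i'PZ_j
                row[f"sigma2_{j + 1}"] = float(np.sum((M ** 2).sum(axis=1) / di))
            coeffs.append(row)
        return quads, coeffs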


Let us illustrate the method with our same example, regarding t as fixed.
39 18 12 9

18 0 0

X0 X =

12 0
9

and a g-inverse is
0 0
0
0

1
18
0
0

121 0
91

The coefficient matrix of equations like (67) is in (70), (71) and (72) and the right
hand side is (.1111, 4.6111, .4444, -5.1667, 3.7778, 2.1667, .4444, -6.3889, -1, 1, 0, -2.6667,
1.4444, 1.2222)0 .
20

Upper left 7 7

9.3611 5.0000 1.4722 2.8889


4.4444 1.3333 .8889

6.7222
.6667
1.0556
1.3333
2.5 .3333

2.6944 .5556 .8889 .3333 1.7778

4.5 2.2222 .8333 .5556

4.4444
1.3333
.8889

2.5 .3333

1.7778

(70)

Upper right 7 7 and (lower left 7 7)

2.2222
2.9167 2.3333 .5833
2.0 1.3333 .6667

.8333 2.3333
2.6667 .3333 1.3333
1.5556 .2222

.5556 .5833 .3333


.9167
0
0
0

3.6111
0
0
0 .6667 .2222
.8889

2.2222
0
0
0
0
0
0

.8333
0
0
0
0
0
0

.5556
0
0
0
0
0
0

(71)

Lower right 7 7

3.6111

0
0
0
2.9167 2.3333 .5833
2.6667 .3333
.9167

0
0
0

0
0
0

0
0
0

0
0
0

2.0 1.3333 .6667

1.5556 .2222

.8889

(72)

The diagonals of the variance of the reduced right hand sides are needed in this
method and other elements are needed for approximate MIVQUE in Chapter 11. The
2
coefficients of e2 in this variance are in (70), . . . , (72). The coefficients of ts
are in (73),
0
(74) and (75). These are computed by (Cols. 5-14 of 10.70) (same) .
Upper left 7 7

47.77 24.54 5.31 17.93


27.26 7.11 3.85

25.75
.39
1.60
7.11
8.83
.22

5.66
.74 3.85
.22
4.37

20.27 16.30 1.94 .74

27.26 7.11 3.85

8.83
.22

4.37
21

(73)

Upper right 7 7 and (lower left 7 7)

16.30
14.29 12.83 1.46
6.22 4.59 1.63

1.94 12.83
12.67
.17 4.59
4.25
.35

.74 1.46
.17
1.29
0
0
0

18.98
0
0
0 1.63
.35
1.28

16.30
0
0
0
0
0
0

1.94
0
0
0
0
0
0

.74
0
0
0
0
0
0

(74)

Lower right 7 7

18.98

0
0
0
14.29 12.83 1.46
12.67
.17
1.29

0
0
0

0
0
0

0
0
0

0
0
0

6.22 4.59 1.63

4.25
.35

1.28

(75)

The coefficients of s2 are in (76), (77), and (78). These are computed by (Cols 1-4
of 10.70) (same)0 .
Upper left 7 7

123.14 76.39 12.81 33.95


56.00 22.08 7.67

71.75
1.67
2.97
28.25
24.57
1.60

10.18
.96 6.81
.14
6.63

30.02 20.94 2.35 .57

27.26
7.11
3.85

8.83
.22

4.37

(76)

Upper right 7 7 and (lower left 7 7)

26.25
39.83 34.69 5.14
27.31 19.62 7.70

2.07 29.88
29.81
.06 18.26
17.36
.90

.32 4.31
.76
3.55 1.69
1.05
.64

23.86 5.64
4.11
1.53 7.37
1.21
6.16

16.30
16.59 13.63 2.96
12.15 7.51 4.64

1.94 9.53
9.89 .36 5.44
5.85 .41

.74 2.85
.59
2.26
.96
.79
.17

22

(77)

Lower right 7 7

18.98 4.21
3.15
1.06 5.74
.86
4.88

14.29 12.83 1.46


8.94 7.52 1.43

12.67
.17 8.22
7.26
.96

1.29 .72
.26
.46

6.22 4.59 1.63

4.25
.35

1.28

(78)

The reduction for ts is


3.7782
1.2222
+ +
= 23.799.
4.444
.889
2
+ s2 ), where 10 is the number of elements in the
The expectation is 10 e2 + 35.7262 (ts
ts vector and
27.259
1.284
35.7262 =
+ +
.
4.444
.889
The reduction for s is

(5.167)2
.1112
+ +
= 9.170.
9.361
4.5
2
+ 34.2770 s2 , where
The expectation is 4 e2 + 15.5383 ts

47.773
20.265
+ +
.
9.361
4.5
30.019
123.144
+ +
.
34.2770 =
9.361
4.5
15.5383 =

Thus
e2
1
0
0

e2

2
E Red(ts) = 10 35.7262 35.7262 ts .
s2
Red(s)
4 15.5383 34.2770

Then

e2
1
0
0
.3945
.3945
2

ts = .29855
.05120 .05337 23.799 = .6114

.

s2
.01864 .02321
.05337
9.170
.0557

11  Henderson's Method 2

Henderson's Method 2 (1953) is probably of interest from an historical viewpoint
only. It has the disadvantage that random by fixed interactions and random within fixed
nesting are not permitted. It is a relatively easy method, but usually no easier than
the method described in Sect. 10.10, absorption of β, and little if any easier than an
approximate MIVQUE procedure described in Chapter 11.
Method 2 involves correction of the data by a least squares solution to β excluding
μ. Then a Method 1 analysis is carried out under the assumption of a model

    y = 1μ + Σᵢ Zᵢuᵢ + e.

If the solution to β° is done as described below, the expectations of the Method 1
reductions are identical to those for a truly random model except for an increase in the
coefficients of σ²ₑ. Partition

    Z = [Zₐ Z_b]

such that rank(Zₐ) = rank(Z). Then partition

    X = (Xₐ X_b)

such that

    rank(Xₐ Zₐ) = rank(X Z).

See Henderson, Searle, and Schaeffer (1974). Solve equations (79) for βₐ°.

$$\begin{pmatrix} X_a'X_a & X_a'Z_a \\ Z_a'X_a & Z_a'Z_a \end{pmatrix}
\begin{pmatrix} \beta_a^o \\ u_a^o \end{pmatrix} =
\begin{pmatrix} X_a'y \\ Z_a'y \end{pmatrix}. \qquad (79)$$

Let the upper submatrix (pertaining to βₐ°) of the inverse of the matrix of (79) be denoted
by P. This can be computed as

    P = [Xₐ'Xₐ - Xₐ'Zₐ(Zₐ'Zₐ)⁻¹Zₐ'Xₐ]⁻¹.                (80)

Now compute

    1'y* = 1'y - 1'Xₐβₐ°,                               (81)
    Zᵢ'y* = Zᵢ'y - Zᵢ'Xₐβₐ°,  i = 1, . . . , b.          (82)

Then compute the following quadratics:

    (1'y*)²/n, and                                      (83)
    (Zᵢ'y*)'(Zᵢ'Zᵢ)⁻¹(Zᵢ'y*) for i = 1, . . . , b.       (84)

The expectations of these quadratics are identical to those with y in place of y* except
for an increase in the coefficient of σ²ₑ computed as follows. The coefficient of σ²ₑ in the
expectation of (83) is increased by

    tr P(Xₐ'11'Xₐ)/n.                                   (85)

The increase in the coefficient of σ²ₑ in the expectation of (84) is

    tr P[Xₐ'Zᵢ(Zᵢ'Zᵢ)⁻¹Zᵢ'Xₐ].                           (86)

Note that Xₐ'Zᵢ(Zᵢ'Zᵢ)⁻¹Zᵢ'Xₐ is the quantity that would be subtracted from Xₐ'Xₐ if
we were to absorb uᵢ. σ²ₑ can be estimated in a number of ways, but usually by the
conventional residual

    [y'y - (β°)'X'y - (u°)'Z'y]/[n - rank(X Z)].

Sampling variances for Method 2 can be computed by the same procedure as for
Method 1 except that the variance of the adjusted right hand sides of the μ and u equations
is increased by

$$\begin{pmatrix} 1'X_a \\ Z'X_a \end{pmatrix} P\,(X_a'1 \;\; X_a'Z)\,\sigma_e^2$$

over the unadjusted. As is true for other quadratic estimators, quadratics in the adjusted
right hand sides are uncorrelated with σ̂²ₑ, the OLS residual mean square.
2
We illustrate Method 2 with our same data, but now we assume that ts
does not
exist. This 2 way mixed model could be done just as easily by Method 3 as by Method 2,
but it suffices to illustrate the latter. Delete and t3 and include all 4 levels of s. First
solve for a in these equations.

18

0
12

8 3 2 5
7 4 1 0

21 0 0 0

9 0 0

3 0
6

a
ua

113
96
153
71
21
34

The solution is a = [-1.31154, .04287]0 , ua = (7.7106, 8.30702, 7.86007, 6.75962)0 . The


adjusted right hand sides are

279
153
71
21
34

18 12

8 7

3 4

2 1

5 0

1.31154
.04287

302.093
163.192
74.763
23.580
40.558

Then the sum of squares of adjusted right hand sides for sires is
(163.192)2
(40.558)2
+ +
= 2348.732.
12
6
25

The adjusted C.F. is (302.093)2 /39 = 2340.0095. P is the upper 2x2 of the inverse of the
coefficient matrix (79) is
P =

.179532 .110888
.200842

21 0 0 0

9 0 0

3 0
6

X0a Zi (Z0i Zi )1 Z0i Xa =

8 3 2 5
7 4 1 0

9.547619 4.666667
4.444444

8
3
2
5

7
4
1
0

The trace of this (86) is 3.642 to be added to the coefficient of e2 in E (sires S.S). The
trace of P times the following matrix
18
12

39

1

(18 12) =

8.307692 5.538462
3.692308

gives 3.461 to be added to the coefficient of e2 in E(CF). Then


E(Sire SS) = 7.642 e2 + 39 s2 + a quadratic.
E(C.F.) = 4.461 e2 + 14.538 s2 + the same quadratic.
Then taking some estimate of e2 one equates these expectations to the computed sums
of squares.

12  An Unweighted Means ANOVA

A simple method for testing hypotheses approximately is the unweighted means analysis
described in Yates (1934). This method is appropriate for the mixed model described
in Section 4 provided that every subclass is filled and there are no covariates. The smallest
subclass means are taken as the observations as in Section 6 in Chapter 1. Then a
conventional analysis of variance for equal subclass numbers (in this case 1) is performed.
The expectations of these mean squares, except for the coefficients of σ²ₑ, are exactly the
same as they would be had there actually been only one observation per subclass. An
algorithm for finding such expectations is given in Henderson (1959).
The coefficient of σ²ₑ is the same in every mean square. To compute it let s =
the number of smallest subclasses, and let nᵢ be the number of observations in the ith
subclass. Then the coefficient of σ²ₑ is

$$\sum_{i=1}^{s} n_i^{-1}/s. \qquad (87)$$

Estimate σ²ₑ by

$$\left[y'y - \sum_{i=1}^{s} y_{i\cdot}^2/n_i\right]/(n - s), \qquad (88)$$

where yᵢ. is the sum of observations in the ith subclass. Henderson (1978a) described a
simple algorithm for computing sampling variances for the unweighted means method.
We illustrate estimation by a two way mixed model,
yijk = ai + bj + ij + eijk .
bj is fixed.
V ar(a) = Ia2 , V ar() = I2 , V ar(e) = Ie2 .
Let the data be

A
1
2
3
4

1
5
2
1
2

nij
B
2
4
10
4
1

3
1
5
2
5

1
8
7
6
10

yij.
B
2 3
10 5
8 4
9 3
12 8

The mean squares and their expectation in the unweighted means analysis are
df
A
3
B
2
AB 6

MS
9.8889
22.75
.3056

E(ms)
.475 e2 + 2 + 3a2
.475 e2 + 2 + Q(b)
.475 e2 + 2

Suppose
e2 estimated as described above is .2132. Then

2 = .3056 .475(.2132) = .2043,


and

a2 = (9.8889 .3056)/3 = 3.1944.


The coefficient of e2 is (51 + 41 + ... + 51 )/12 = .475.

27

13

Mean Squares For Testing K0u

Section 2.c in Chapter 4 described a general method for testing the hypothesis, K0 =
0 against the unrestricted hypothesis. The mean square for this test is
( o )0 K(K0 CK)1 K0 o /f.
C is a symmetric g-inverse of the GLS equations or is the corresponding partition of a
g-inverse of the mixed model equations and f is the number of rows in K0 chosen to have
full row rank. Now as in other ANOVA based methods of estimation of variances we can
compute as though u is fixed and then take expectations of the resulting mean squares to
estimate variances. The following precaution must be observed. K0 u must be estimable
under a fixed u model. Then we compute
(uo )0 K(K0 CK)1 K0 uo /f,

(89)

where uo is some solution to (90) and f = number of rows in K0 .


X0 X X 0 Z
Z0 X Z0 Z

o
uo

X0 y
Z0 y

(90)

The assumption is that V ar(e) = Ie2 . C is the lower q q submatrix of a g-inverse of


the coefficient matrix in (90). Then the expectation of (89) is
f 1 tr K(K0 CK)1 K0 V ar(u) + e2 .

(91)

This method seems particularly appropriate in the filled subclass case for then with interactions it is relatively easy to find estimable functions of u. To illustrate, consider the
two way mixed model of Section 11. Functions for estimating a2 are

a1 + 1. a4 4.

2. a4 4.
a2 +
/3.
a3 + 3. a4 4.
Functions for estimating 2 are
[ij i3 4j + 34 ]/6; i = 1, 2, 3; j = 1, 2.
This is an example of a weighted square of means analysis.
The easiest solution to the OLS equations for the 2 way case is ao , bo = null and
= yij . Then the first set of functions can be estimated as i.o 4.o (i = 1, 2, 3).
Reduce K0 to this same dimension and take C as a 12 12 diagonal matrix with diagonal
elements = n1
ij .
ijo

28

Chapter 11
MIVQUE of Variances and Covariances
C. R. Henderson
1984 - Guelph

The methods described in Chapter 10 for estimation of variances are quadratic,
translation invariant, and unbiased. For the balanced design, where there are equal numbers
of observations in all subclasses and no covariates, equating the ANOVA mean squares
to their expectations yields translation invariant, quadratic, unbiased estimators with
minimum sampling variance regardless of the form of distribution, Albert (1976); see also
Graybill and Wirtham (1956). Unfortunately, such an estimator cannot be derived in the
unbalanced case unless G and R are known at least to proportionality. It is possible,
however, to derive locally best, translation invariant, quadratic, unbiased estimators under
the assumption of multivariate normality. This method is sometimes called MIVQUE
and is due to C. R. Rao (1971). Additional pioneering work in this field was done by La
Motte (1970, 1971) and by Townsend and Searle (1971). By locally best is meant that
if G̃ = G and R̃ = R, the MIVQUE estimator has minimum sampling variance in the
class of quadratic, unbiased, translation invariant estimators. G̃ and R̃ are prior values
of G and R that are used in computing the estimators. For the models which we have
described in this book MIVQUE based on the mixed model equations is computationally
advantageous. A result due to La Motte (1970) and a suggestion given to me by Harville
have been used in deriving this type of MIVQUE algorithm. The equations to be solved
are in (1).

$$\begin{pmatrix} X'\tilde R^{-1}X & X'\tilde R^{-1}Z \\ Z'\tilde R^{-1}X & Z'\tilde R^{-1}Z + \tilde G^{-1} \end{pmatrix}
\begin{pmatrix} \beta^o \\ \hat u \end{pmatrix} =
\begin{pmatrix} X'\tilde R^{-1}y \\ Z'\tilde R^{-1}y \end{pmatrix}. \qquad (1)$$

These are mixed model equations based on the model

    y = Xβ + Zu + e.                                  (2)

We define Var(u), Var(e) and Var(y) as in (2, 3, 4, 5, 6, 7, 8) of Chapter 10.

La Motte Result For MIVQUE


La Motte defined
V ar(y) = V =

Xk
i=1

Vi i .

(3)

Then
=
V

Xk
i=1

Vi i ,

(4)

where i are prior values of i . The i are unknown parameters and the Vi are known
matrices of order n n. He proved that MIVQUE of is obtained by computing
1 Vi V
1 (y X o ), i = 1, . . . , k,
(5)
(y X o )0 V
equating these k quadratics to their expectations, and then solving for . o is any
solution to equations
1 y.
1 X o = X0 V
(6)
X0 V

These are GLS equations under the assumption that V = V.

Alternatives To La Motte Quadratics

In this section we show that other quadratics in y X o exist which yield the same
estimates as the La Motte formulation. This is important because there may be quadratics
easier to compute than those of (5), and their expectations may be easier to compute.
Let the k quadratics of (5) be denoted by q. Let E(q) = B, where B is k k. Then
provided B is nonsingular, MIVQUE of is
= B1 q.

(7)
Let H be any k k nonsingular matrix. Compute a set of quadratics Hq and equate to
their expectations.
E(Hq) = HE(q) = HB.
(8)
Then an unbiased estimator is
o = (HB)1 Hq

= B1 q = ,

(9)

the MIVQUE estimator of La Motte. Therefore, if we derive the La Motte quadratics, q,


for MIVQUE, we can find another set of quadratics which are also MIVQUE, and these
are represented by Hq, where H is nonsingular.

Quadratics Equal To La Mottes


The relationship between La Mottes model and ours is as follows

V of LaMotte =

G11 g11
0
G12 g12
0
G13 g13
..
.

G12 g12
G22 g22
0
G23 g23
..
.
2

G13 g13
G23 g23
0
Z
G33 g33

..
.

R11 r11
0
R12 r12
0
R13 r13
..
.

or V1 1 =

V2 2 =

G11
0
0
..
.

0
0
G12
0
..
.

R11
0
0
..
.

0
0
0
..
.

R13 r13
R23 r23

R33 r33

..
.

R12 r12
R22 r22
0
R13 r23
..
.

= ZGZ0 + R.

0 0
0 0
0
Z g11 ,
0 0

.. ..
. .

G12
0
0
..
.

0
0
0
Z g12 ,
0

..
.

(10)

(11)

(12)

etc., and

Vb+1 b+1 =

Vb+2 b+2 =

0
0
R12
0
..
.

R12
0
0
..
.

0
0

r11 ,
0

..
.

0
0

r12 ,
0

..
.

(13)

etc. Define the first b(b + 1)/2 of (12) as ZGij Z0 and the last c(c + 1)/2 of (13) as Rij .
Then for one of the first b(b + 1)/2 of La Mottes quadratic we have
1 ZG Z0 V
1 (y X o ).
(y X o )0 V
ij

(14)

1 ZG
G
1 G G
1 GZ
0V
1 (yX o ).
(y X o )0 V
ij

(15)

Write this as
G
1 = I. Now note that GZ
0V
1 (y X o ) = u
= BLUP
This can be done because G

of u given G = G and R = R. Consequently (15) can be written as


1 G G
1 u
0G
.
u
ij

(16)

By the same type of argument the last c quadratics are


1 R R
1 e
0 R
,
e
ij
3

(17)

and R = R.
Taking into account that
is BLUP of e given that G = G
where e

G12 g12

G22 g22

..
.

G11 g11
0

G = G12 g12
..
.

can be computed easily. Let


the matrices of the quadratics in u

1 =

C11
0
C12
0
C13
..
.

C12
C22
0
C23
..
.

C13
C23

= [C1 C2 C3 . . .].
C33

..
.

For example,

C2 =

C12
C22
0
C23
..
.

Then
1 G G
1 = Ci Gii C0 .
G
i
ii
1 = Ci Gij C0 + Cj G0 Ci for i 6= j.
1 G G
G
j
ij
ij

(18)
(19)

1 , Rij substituted for G


1 , Gij and
are like (18) and (19) with R
The quadratics in e
1 = [C1 C2 . . .]. For special cases these quadratics simplify considerably. First
with R
consider the case in which all gij = 0. Then

G =

G11 g11
0
0

0
G22 g22
0

,
0
0
G33 g33

..
..
..
.
.
.

and
1
G1
0

11 g11

1 1

0
G
g
.
=
22 22

..
..
.
.

G1

become
Then the quadratics in u

2
i G1
,
u
ii gii u

or an alternative is obviously

i G1
i,
u
ii u
4

obtained by multiplying these quadratics by


2
2
dg(g11
, g22
, . . .).

(20)

can be converted to
Similarly if all rij = 0, the quadratics in e
0

i R1
i .
e
ii e

(21)

The traditional mixed model for variance components reduces to a particularly simple
form. Because all gij = 0, for i 6= j, all Gii = I, and R = I, the quadratics can be written
as
0
iu
i i = 1, . . . , b, and e
0 e
.
u
Pre-multiplying these quadratics by

we obtain

1
0
..
.

0
1
..
.

0
0

e2 /12 e2 /22 1

1u
1
u
0
2u
2
u
..
.
0 e
+
e

P e2
i 2
i

iu
i
u

But the last of these quadratics is y0 y y0 X o y0 Z


u, or a quantity corresponding to
the least squares residual. This is the algorithm described in Henderson (1973).
One might wish in this model to estimate e2 by the OLS residual mean square, that
is,

e2 = [y0 y ( o )0 X0 y (uo )0 Z0 y]/[n rank(X Z)],


where o , uo are some solution to OLS equations. If this is done,
e2 is not MIVQUE and
neither are
i2 , but they are probably good approximations to MIVQUE.
Another special case is the multiple trait model with additive genetic assumptions
and elements of u for missing observations included. Ordering animals within traits,
0

y0 = [y1 y2 . . .].
0
yi
is the vector of records on the ith trait.
0
0
u0 = (u1 u2 . . .).
ui
is the vector of breeding values for the ith trait.
0
0
e0 = (e1 e2 . . .).

Every ui vector has the same number of elements by including missing u. Then

Ag11

Ag
12
G =
..
.
where

Ag12

Ag22 = A G0 ,

..
.

g11

g
G0 =
12
..
.

(22)

g12

g22

..
.

is the additive genetic variance- covariance matrix for a non-inbred population, and
denotes the direct product operation.
A1 g 11

A1 g 12
=

..
.

G1

A1 g 12

A1 g 22 = A1 G1
0 .

..
.

(23)

illustrated for a 3 trait


Applying the methods of (16) to (18) and (19) the quadratics in u
model are

0
1
1 A1 u
u

0 1
1A u
2

! u
0
B11 B12
3
1 A1 u

u
.
0 1
0

B12 B22 u
2
2A u

0
u
1
3

2A u
0
3 A1 u
3
u
g 11 g 11 2g 11 g 12
2g 11 g 13

11 22
12 12
2g g + 2g g
2g 11 g 23 + 2g 12 g 13
=
.
11 33
13 13
2g g + 2g g

B11

g 12 g 12 2g 12 g 13
g 13 g 13

=
2g 12 g 22 2g 12 g 23 + 2g 13 g 22 2g 13 g 23 .
2g 12 g 23 2g 12 g 33 + 2g 13 g 23 2g 13 g 33

B12

g 22 g 22 2g 22 g 23
g 23 g 23

2g 22 g 33 + 2g 23 g 23 2g 23 g 33
=
.
g 33 g 33

B22

i by the inverse of
Premultiplying these quadratics in u

B11 B12
0
B12 B22

we obtain an equiv-

alent set of quadratics that are


0

i A1 u
j for i = 1, . . . , 3; j = i, . . . , 3.
u

(24)

Similarly if there are no missing observations,

I r12

I r22

..
.

I r11

I
R = r12
..
.
0

are e
i e
j for i = 1, . . . , t; j = i, . . . , t.
Then quadratics in e

Computation Of Missing u

rather
In most problems, MIVQUE is easier to compute if missing u are included in u
1
than ignoring them. Section 3 illustrates this with A being the matrix of all quadratics
.
in u
Three methods for prediction of elements of u not in the model for y were described
in Chapter 5. Any of these can be used for MIVQUE. Probably the easiest is to include
the missing ones in the mixed model equations.

Quadratics In e With Missing Observations

are easier to envision if we


When there are missing observations the quadratics in e
order the data by traits in animals rather than by animals in traits. Then R is block
diagonal with the order of the ith diagonal block being the number of traits recorded for
1 nor even all of the diagonal blocks.
the ith animal. Now we do not need to store R
Rather we need to store only one block for each of the combinations of traits observed.
For example, with 3 traits the possible combinations are
Traits
Combinations 1 2 3
1
X X X
2
X X 3
X - X
4
- X X
5
X - 6
- X 7
- - X
There are 2t 1 possible combinations for t traits. In the case of sequential culling the
possible types are
7

Traits
Combinations 1 2 3
1
X X X
2
X X 3
X - There are t possible combinations for t traits.
1 for animals with the same traits measured will be identical. Thus
The block of R
1 ,
if 50 animals have traits 1 and 2 recorded, there will be 50 identical 2 2 blocks in R
and only one of these needs to be stored.
. All of the quadratics
The same principle applies to the matrices of quadratics in e
0
i Q
i refers to the subvector of e
pertaining to the ith animal.
are of the form e
ei , where e
But animals with the same record combinations, will have identical matrices of quadratics
for estimation of a particular variance or covariance. The computation of these matrices
1 be P, which is symmetric
is simple. For a particular set of records let the block in R
and with order equal to the number of traits recorded. Label rows and columns by trait
number. For example, suppose traits 1, 3, 7 are recorded. Then the rows (and columns)
of P are understood to have labels 1, 3, 7. Let
P (p1 p2 p3 . . .),
where pi is the ith column vector of P. Then the matrix of the quadratic for estimating
rii is
0
pi pi .
(25)
The matrix for estimating rij (i 6= j) is
0

pi pj + pi pj .

(26)

Let us illustrate with an animal having records on traits 2, 4, 7. The block of R corresponding to this type of information is

6 4 3

8 5

.
7
Then the block corresponding to R1 is the inverse of this, which is

.25410 .10656 .03279

.27049 .14754

.
.26230
Then the matrix for estimation of r22 is

.25410

.10656 (.25410 .10656 .03279).


.03279
8

The matrix for computing r27 is

.25410

.10656

(.03279 .14754 .26230)


.03279
+ the transpose of this product.

And e
Expectations Of Quadratics In u

and in e
to their
MIVQUE can be computed by equating certain quadratics in u
expectations. To find the expectations we need a g-inverse of the mixed model coefficient
and R,
prior values, substituted for G and R. The formulas for these
matrix with G
expectations are in Section 6 of Chapter 10. It is obvious from these descriptions of
expectations that extensive matrix products are required. However, some of the matrices
have special forms such as diagonality, block diagonality, and symmetry. It is essential
that these features be exploited. Also note that the trace of the products of several
matrices can be expressed as the trace of the product of two matrices, say trace (AB).
Because only the sum of the diagonals of the product AB is required, it would be foolish
to compute the off-diagonal elements. Some special computing algorithms are
trace (AB) =

X X
i

aij bji

when A and B are nonsymmetric.


X
X X
trace (AB) =
a
b
+
2
a b
ii
ii
i
i
j>i ij ij
when A and B are both symmetric.
X
tr (AB) =
a b
i ii ii
when either A or B or both are diagonal.

(27)
(28)
(29)

It is particularly important to take advantage of the form of matrices of quadratics


in animal models. When the data are ordered by traits within animals the necessary
in e
P 0
i Q
quadratics have the form i e
ei , where Qi is a block of order equal to the number of
th
traits observed in the i animal. Then the expectation of this quadratic is tr Q V ar(
ei ).
Consequently we do not need to compute all elements of V ar(
e), but rather only the
elements in blocks down the diagonal corresponding to the various Q. In some cases,
depending upon the form of X, these blocks may be identical for animals with the same
traits observed.
Many problems are such that the coefficient matrix is too large for computation of a
g-inverse with present computers. Consequently we present in Section 7 an approximate
MIVQUE based on computing an approximate g-inverse.

Approximate MIVQUE

MIVQUE for large data sets is prohibitively expensive with 1983 computers because
a g-inverse of a very large coefficient matrix is required. Why not use an approximate ginverse that is computationally feasible? This was the idea presented by Henderson(1980).
The method is called Diagonal MIVQUE by some animal breeders. The feasibility of
this method and the more general one presented in this section requires that an approx 1 X can be computed easily. First absorb o from the mixed
imate g-inverse of X0 R
model equations.
1 X X0 R
1 Z
X0 R
1 Z + G
1
1 X Z0 R
Z0 R

1 y
X0 R
1 y
Z0 R

(30)

This gives
1 ] u
= Z0 Py
[Z0 PZ + G
(31)
1 R
1 X(X0 R
1 X) X0 R
1 , and (X0 R
1 X) is chosen to be symmetric.
where P = R
,
From the coefficient matrix of (31) one may see some simple approximate solution to u

. Corresponding to this solution is a matrix C11 such that


say u
11 Z0 Py
=C
u

(32)

11 as an approximation to
Interpret C
1 ]1
C11 = [Z0 PZ + G
,
Then given u
= (X0 R
1 X) (X0 R
1 y X0 R
1 Z

u).
Thus an approximate g-inverse to the coefficient matrix is
=
C

00 C
01
C
10 C
11
C

0
C
1
C

(33)

00 = (X0 R
1 X) + (X0 R
1 X) X0 R
1 ZC
11 Z0 R
1 X(X0 R
1 X) .
C
01 = (X0 R
1 X) X0 R
1 ZC
11 .
C
10 = C
11 Z0 R
1 X(X0 R
1 X) .
C

This matrix post-multiplied by

1 y
X0 R
1 y
Z0 R

equals

symmetric.
10

11 may be non. Note that C

What are some possibilities for finding an approximate easy solution to u and conse 11 ? The key to this decision is the pattern of elements of the matrix
quently for writing C
of (31). If the diagonal is large relative to off-diagonal elements of the same row for every
11 to the inverse of a diagonal matrix formed from the diagonals of the corow, setting C
efficient matrix is a logical choice. Harville suggested that for the two way mixed variance
components model one might solve for the main effect elements of u by using only the
diagonals, but the interaction terms would be solved by adjusting the right hand side for
the previously estimated associated main effects and then dividing by the diagonal. This
11 .
would result in a lower triangular C
The multi-trait equations would tend to exhibit block diagonal dominance if the
11 might well take the form
elements of u are ordered traits within animals. Then C
B1
1

..
.

.
B1
2

..
..
.
.

and
and
where B1
is the inverse of the ith diagonal block, Bi . Having solved for u
i

and e
as in regular
having derived C one would then proceed to compute quadratics in u
MIVQUE. Their expectations can be found as described in Section 7 of Chapter 10 except
is substituted for C.
that C

MIVQUE (0)

MIVQUE simplifies greatly in the conventional variance components model if the


priors are
gii /
r11 =
i2 /
e2 = 0 for all i = 1, . . . , b.
Now
= I
V
e2 , o = (X0 X) X0 y,
and
1 Zi Gii Z0 V
1 (y X o )
(y X o )0 V
i
0
= y0 (I X(X0 X) X0 )0 Zi Zi (I X(X0 X) X0 )y/
e4 .

(34)

Note that this, except for the constant,


e4 , is simply the sum of squares of right hand
sides of the OLS equations pertaining to uoi after absorbing o . This is easy to compute,
and the expectations are simple. Further, for estimation of e2 we derive the quadratic,
y0 y ( o )0 X0 y,
11

(35)

and the expectation of this is simple.


This method although simple to compute has been found to have large sampling
variances when i2 /e2 departs very much from 0, Quaas and Bolgiano(1979). Approximate
11 is not much more difficult and gives substantially smaller
MIVQUE involving diagonal C
2
2
variances when i /e > 0.
= 0 the MIVQUE computations are effected as follows.
For the general model with G
This is an extension of MIVQUE(0) with R 6= Ie2 , and Var (ui ) 6= Ii2 . Absorb o from
equations
!
!
!
1 X X0 R
1 Z
1 y
X0 R
o
X0 R
=
(36)
1 X Z0 R
1 Z
1 y .
uo
Z0 R
Z0 R
Then compute
1 y (y0 R
1 X)(X0 R
1 X) X0 R
1 y
y0 R

(37)

and ri Gij rj i=1, . . . , b; j=i, . . . , b, where ri = absorbed right hand side for
Estimate rij from following quadratics
ij e
i R
j
e

uoi

equations.

where
1 X) X0 R
1 ]y.
= [I X(X0 R
e

(38)

MIVQUE For Singular G

The formulation of (16) cannot be used if G is singular, neither can (18) if Gii is
singular, nor (25) if A is singular. A simple modification gets around this problem. Solve
in (51) of Chapter 5. Then for (16) substitute
for

0 G ,
where u
=G

(39)
ij

For (20) substitute

i Gii
i.

For (25) substitute

(40)

i A
j

(41)

See Section 16 for expectations of quadratics in .

10

MIVQUE For The Case R = Ie2


When R = Ie2 the mixed model equations can be written as
X0 X X 0 Z
1
Z0 X Z0 Z + e2 G

12

X0 y
Z0 y

(42)

are
If o is absorbed, the equations in u
1 )
(Z0 PZ + e2 G
u = Z0 Py,

(43)

where
P = I X(X0 X) X0 .
Let
1 )1 = C.
(Z0 PZ + e2 G

(44)

Then
= CZ0 Py.
u
0 Qi u
= y0 PZC0 Qi CZ0 Py
u
= tr C0 Qi CZ0 Pyy0 PZ.
) = tr C0 Qi C V ar(Z0 Py).
E(
u0 Qi u
V ar(Z0 Py) =

Xb

Xb

i=1
0

+Z

(45)
(46)
(47)
0

Z0 PZi Gij Zj PZgij

j=1
PPZe2 .

(48)

One might wish to obtain an approximate MIVQUE by estimating e2 from the OLS
residual. When this is done, the expectation of the residual is [n r(X Z)] e2 regardless
This method is easier than true MIVQUE and has advantages in
of the value of G.
computation of sampling variances because the estimator of e2 is uncorrelated with the
0 Q
various u
u. This method also is computable with absorption of o .
A further simplification based on the ideas of Section 7, would be to look for some
in (44). Call this solution u
and the corresponding
simple approximate solution to u

approximate g-inverse of the matrix of (44) C. Then proceed as in (46) . . . (48) except
for C.
for u
and C
substitute u

11

Sampling Variances

0 Qi u
, i = 1, . . . , b, where b = number of elements
MIVQUE consists of computing u
0
Qj e
, j = 1, . . . , t, where t = number of elements
of g to be estimated, and e
! of r to
C
be estimated. Let a g-inverse of the mixed model matrix be C
, and let
Cu
W = (X Z).
Then
1 y,
= Cu W0 R
u
1 WC0 Qi Cu W0 R
1 y y0 Bi y,
0 Q
u
u = y0 R
u
1 )y,
= (I WCW0 R
e
13

(49)

and
1 ]0 Qj [I WCW0 R
1 ]y y0 Fj y.
0 Qj e
= y0 [I WCW0 R
e
Let

y0 B1 y

..

(50)

y 0 F1 y

..
.

= P

g
r

g
r

= P, where =

Then MIVQUE of is
y 0 B1 y

..

y 0 H1 y

y 0 H2 y .
=
P1
0

y F1 y
..

.
..
.

(51)

Then
V ar(i ) = 2 tr[Hi V ar(y)]2 .
Cov(i , j ) = 2 trHi [V ar(y)] Hj [V ar(y)].

(52)
(53)

These are of course quadratics in unknown elements of g and r. A numerical solution is


= V ar(y) for some assumed values of g and r. Then
easier. Let V
2.
V ar(i ) = 2 tr(Hi V)
j V).

Cov(i , j ) = 2 tr(Hi VH

(54)
(55)

an approximation to C , the compuIf approximate MIVQUE is computed using C


u
, e
are used in place of C, u
, e
.
tations are the same except that C,

11.1

Result when e2 estimated from OLS residual

When R = Ie2 , one can estimate e2 by the residual mean square of OLS and an
approximate MIVQUE obtained. The quadratics to be computed in addition to
e2 are
0 Qi u
. Let
only u

!
!
0 Qi u

P
f
g
.
=
..
E
.
2

0
1

e2
Then

e2

P f
0 1

!1

0 H1 u

u
0 Qi u

u
0

u
H
u

2
..

=
..
.

e2
0

14

s1
e2
s2
e2
..
.

e2

(56)

Then
V ar(
gi ) = 2 tr[Hi V ar(
u)]2 + s2i V ar(
e2 ).
j ) = 2 tr[Hi V ar(
Cov(
gi , g
u) Hj V ar(
u)]
+ si sj V ar(
e2 ).
where

(57)
(58)

V ar(
u) = Cu [V ar(r)]Cu ,
and r equals the right hand sides of mixed model equations.
1 RR
1 W.
1 ZGZ0 R
1 W + W0 R
V ar(
u) = W0 R

(59)

If V ar(r) is evaluated with the same values of G and R used in the mixed model equations,
and R,
then
namely G
1 ZGZ
0R
1 W + W0 R
1 W.
V ar(r) = W0 R
V ar(
e2 ) = 2e4 /[n rank (W)],

(60)
(61)

where
e2 is the OLS residual mean square. This would presumably be evaluated for e2
=
e2 .

12
12.1

Illustrations Of Approximate MIVQUE


MIVQUE with
e2 = OLS residual

We next illustrate several approximate MIVQUE using as


e2 the OLS residual. The
same numerical example of treatments by sires in Chapter 10 is employed. In all of these
we absorb o to obtain the equations already presented in (70) to (72) in Chapter 10. We
2
are
use prior e2 /s2 = 10, e2 /ts
= 5 as in Chapter 10. Then the equations in s and ts
those of (70) to (72) with 10 added to the first 4 diagonals and 5 to the last 10 diagonals.
The inverse of this matrix is in (62), (63) and (64). This gives the solution
s0 = [.02966, .17793, .02693, .17520].
= [.30280, .20042, .05299, .55621, .04723,
ts
.04635, .00088, .31489, .10908, .20582].
0
ts
= .60183, s0s = .06396.
(ts)

15

Upper left 7 7

.0713 .0137 .0064 .0086 .0248


.0065
.0070

.0750 .0051 .0062


.0058 .0195
.0052

.0848 .0037
.0071
.0048 .0191

.0815
.0118
.0081
.0069

.1331
.0227
.0169

.1486
.0110

.1582

(62)

Upper right 7 7 and (lower left 7 7)0

.0112
.0085
.0071
.0268
.0273
.0177
.0138

.0178
.0121
.0057 .0147
.0088
.0060

.0126 .0177
.0050
.0089 .0129
.0040

.0061
.0052 .0113 .0004
.0001
.0003

.0009
.0004
.0006
.0063
.0040 .0102

.0092 .0066 .0027


.0081 .0045 .0036

.0058
.0071 .0014 .0039
.0054 .0015

.0025 .0011
.0036 .0003
.0004 .0001

(63)

Lower right 7 7

.1412 .0009 .0005 .0004 .0039 .0013


.0052

.1489 .0364 .0147


.0063 .0054 .0009

.1521 .0115 .0057


.0055
.0002

.1737 .0006 .0001


.0007

.1561
.0274
.0164

.1634
.0092

.1744

(64)

is
0 ts
The expectation of (ts)
0

E[r C02 C2 r] = trC02 C2 V ar(r),


where r = right hand sides of the absorbed equations, and C2 is the last 10 rows of the
inverse above.
2
+ matrix of (10.76)
V ar(r) = matrix of (10.73) to (10.75) ts
to (10.78) s2 + matrix of (10.70) to (10.72) e2 .

C02 C2 is in (65), (66), and (67). Similarly C01 C1 is in (68), (69), and (70) where C2 , C1
refer to last 10 rows and last 4 rows of (62) to (64) respectively. This leads to expectations
as follows
0

2
ts)
= .23851 e2 + .82246 ts
E(ts
+ .47406 s2 .
2
E(s0s) = .03587 e2 + .11852 ts
+ .27803 s2 .

16

Using
e2 = .3945 leads then to estimates,
2

ts
= .6815,
s2 = .1114.

Upper left 7 7

.1657 .0769 .0301 .0588 .3165


.0958
.0982

.1267 .0167 .0331


.0987 .2867
.0815

.0682 .0215
.0977
.0815 .2809

.1134
.1201
.1094
.1012

.01

1.9485
.6920
.5532

2.3159
.4018

2.5645

(65)

Upper right 7 7 and (lower left 7 7)

.01

.1224
.1065
.1018
.3307
.8063
.5902
.4805

.2567
.1577
.1007
.0017
.2207
.1364
.0689

.1594
.0973 .2418
.1380
.1038

.2466
.0889
.1368 .2127
.0758

.0901 .1909 .0029


.0012
.0041

.0029
.0046
.1078
.0759 .1837

.1480 .0726
.2062 .1135 .0927

.1821 .0457 .1033


.1505 .0473

.0411
.1101 .0059
.0085 .0026

(66)

Lower right 7 7

2.1231 .0153 .0071

2.3897 1.0959

2.4738

.01

.0082 .0970 .0456


.1426

.5144
.1641 .1395 .0247

.4303 .1465
.1450
.0016

3.0553 .0176 .0055


.0231

2.5572
.8795
.5633

2.7646
.3559

3.0808

(67)

Upper right 7 7

.5391 .2087 .1100 .1422 .1542


.0299
.0510

.5879 .0925 .1109


.0208 .1295
.0429

.7271 .0705
.0520
.0382 .1522

.6764
.0814
.0613
.0583
.01

.0840
.0145
.0199

.0510 .0091

.0488

17

(68)

Upper right 7 7 and (lower left 7 7)

.01

.0732
.0658
.0620
.2010
.0496
.0274
.0198

.1066
.0656
.0411 .0878
.0484
.0393

.0728 .1130
.0402
.0503 .0820
.0317

.0464
.0431 .0896 .0063
.0017
.0046

.0126
.0043
.0083
.0438
.0319 .0757

.0548 .0360 .0187


.0488 .0244 .0244

.0339
.0450 .0111 .0220
.0340 .0120

.0183 .0103
.0286 .0006
.0020 .0014

(69)

Lower right 7 7

.0968 .0025
.0014
.0012 .0261

.0514 .0406 .0108


.0366

.0485
.0079
.0335

.0187 .0031

.01

.0335

12.2

.0116
.0377

.0321 .0045

.0335
0

.0014
.0045

.0219 .0117

.0258 .0039

.0156

(70)

Approximate MIVQUE using a diagonal g-inverse

in the reduced equations by


An easy approximate MIVQUE involves solving for u
dividing the right hand sides by the corresponding diagonal coefficient. Thus the approx is diagonal with diagonal elements the reciprocal of the diagonals
imate C, denoted by C
of (10.70) to (10.72). This gives
= dg (.0516, .0598, .0788, .0690, .1059, .1333, .1475, .1161, .1263, .1304, .1690,
C
.1429, .1525, .1698)
and an approximate solution,
0 = (.0057, .2758, .0350, -.3563, .4000, .2889, .0656, -.7419, -.1263, .1304, 0, -.3810,
u
.2203, .2076).
0 ts
= 1.06794 with expectation,
Then (ts)
2
.4024 e2 + 1.5570 (ts
+ s2 ).

Also s0s = .20426 with expectation,


2
.0871 e2 + .3510 ts
+ .7910 s2 .

Now
18

C2 C2 = dg (0, 0, 0, 0, 1.1211, 1.7778, 2.1768, 1.3486, 1.5959, 1.7013, 2.8566, 2.0408,


2.3269, 2.8836)/100,
and
0

C1 C1 = dg (.2668, .3576, .6205, .4756, 0, 0, 0, 0, 0, 0, 0, 0,, 0, 0, 0, 0, 0, 0)/100.


Consequently one would need to compute only the diagonals of (10.70) to (10.72), if one
were to use this method of estimation.

12.3

Approximate MIVQUE using a block diagonal approximate g-inverse

Examination of (10.70) to (10.72) shows that a subset of coefficients, namely [sj , ts1j ,
ts2j . . .] tends to be dominant. Consequently one might wish to exploit this structure. If
ij were reordered by i within j and the interactions associated with si placed adjacent
the ts

to si , the matrix would exhibit block diagonal dominance. Consequently we solve for u
in equations with the coefficient matrix zeroed except for coefficients of sj and associated
tsij , etc. blocks. This matrix is in (71, 72, and 73) below.
Upper left 7 7

19.361 0
0
0

16.722 0
0

12.694 0

14.5

4.444
0
0
0
9.444

0
2.5
0
0
0
7.5

0
0
1.778
0
0
0
6.778

(71)

Upper right 7 7 and (lower left 7 7)

0
0
0
3.611
0
0
0

2.917
0
2
0
0
0
0

0
2.667
0
0
0
0
0

0
0
.917
0
0
0
0

2.
0
0
0
0
0
0

0
1.556
0
0
0
0
0

0
0
0
.889
0
0
0

(72)

Lower right 7 7
dg (8.611, 7.9167, 7.6667, 5.9167, 7.0, 6.556, 5.889)
19

(73)

A matrix like (71) to (73) is easy to invert if we visualize the diagonal blocks with reordering. For example,
19.361 4.444 2.917 2.000

9.444 0
0

7.917 0
7.000

.0640 .0301 .0236 .0183

.1201
.0111
.0086

.
=

.1350
.0067
.1481

This illustrates that only 42 or 32 order matrices need to be inverted. Also, each of those
has a diagonal submatrix of order either 3 or 2. The resulting solution vector is
(-.0343, .2192, .0271, -.2079, .4162, .2158, .0585, -.6547, -.1137, .0542, -.0042, -.3711,
.1683, .2389).
0 ts
= .8909 with expectation
This gives (ts)
2
.32515 e2 + 1.22797 ts
+ .79344 s2 ,

and s0s = .0932 with expectation


2
.05120 e2 + .18995 ts
+ .45675 s2 .

12.4

Approximate MIVQUE using a triangular block diagonal


approximate g-inverse

Another possibility for finding an approximate solution is to compute s by dividing


are solved by adjusting the
the right hand side by the corresponding diagonal. Then ts
right hand side for the associated s and dividing by the diagonal coefficient. This leads
to a block triangular coefficient matrix when ts are placed adjacent to s. Without such
re-ordering the matrix is as shown in (74), (75), and (76).
Upper left 7 7

19.361
0
0
0
0
0
0
0
16.722
0
0
0
0
0
0
0
12.694 0
0
0
0
0
0
0
14.5
0
0
0
4.444
0
0
0 9.444 0
0
0
2.5
0
0
0
7.5
0
0
0
1.778
0
0
0 6.778

Upper right 7 7 = null matrix

20

(74)

Lower left 7 7

0
0
0 3.611 0 0 0

2.917
0
0
0
0 0 0

0
2.667 0
0
0 0 0

0
0
.917
0
0 0 0

2.0
0
0
0
0 0 0

0
1.556 0
0
0 0 0

0
0
0
.889 0 0 0

(75)

dg (8.611, 7.917, 7.667, 5.917, 7.0, 6.556, 5.889)

(76)

Lower right 7 7

This matrix is particularly easy to invert. The inverse has the zero elements in exactly
the same position as the original matrix and one can obtain these by inverting triangular
blocks illustrated by

19.361
0
0
0
4.444 9.4444
0
0
2.917
0
7.917
0
2.000
0
0
7.000

.0516
0
0
0
.0243 .1059
0
0
.0190
0
.1263
0
.0148
0
0
.1429

This results in the solution


(.0057, .2758, .0350, -.3563, .3973, .1970, .0564, -.5925, -.1284, .0345, -.0054, -.3826, .1549,
.2613).
0 ts
= .80728 with expectation
This gives (ts)
2
.30426 e2 + 1.12858ts
+ .60987 s2 ,

and s0s = .20426 with expectation


2
+ .79104 s2 .
.08714 e2 + .35104ts

13

An Algorithm for R = Re2 and Cov (ui, uj ) = 0


Simplification of MIVQUE computations result if
0

R = R e2 , Var (ui ) = Gi e2 ; and Cov (ui , uj ) = 0.

21

R and the Gi are known, and we wish to estimate e2 and the i2 . The mixed model
equations can be written as

X0 R1
X
0
1
Z1 R X
0
Z2 R1
X
..
.

X0 R1
Z1
0
1
Z1 R Z1 + G1
1 1
0
1
Z2 R Z1
..
.

X0 R1
...
Z2
0
1
Z1 R Z2
...

0
1
1
Z2 R Z2 + G2 2 . . .

..
.

o
1
u
2
u
..
.

X0 R1
y
0
Z1 R1
y
0
Z2 R1
y
..
.

(77)

i = prior values of e2 /i2 . A set of quadratics equivalent to La Mottes are


0

0 R1
, u
i G1
i (i = 1, 2, . . .).
e
e
i u
0 R1
= y0 R1
But because e
e
y - (soln. vector) (r.h.s. vector)

,
G1 u
u
i i i i i

an equivalent set of quadratics is


0
y0 R1
y (soln. vector) (r.h.s. vector)

and
0

i (i = 1, 2, . . .).
i G1
u
i u

14

Illustration Of MIVQUE In Multivariate Model


We illustrate several of the principles regarding MIVQUE with the following design
No. of Progeny
Sires
Treatment 1 2 3
1
1 2 0
2
2 2 2

We assume treatments fixed with means t1 , t2 respectively. The three sires are a random
sample of unrelated sires from some population. Sire 1 had one progeny on treatment
1, and 2 different progeny on treatment 2, etc. for the other 2 sires. The sire and error
variances are different for the 2 treatments. Further there is a non-zero error covariance
22

between treatments. Thus we have to estimate g11 = sire variance for treatment l, g22
= sire variance for treatment 2, g12 = sire covariance, r11 = error variance for treatment
1, and r22 = error variance for treatment 2. We would expect no error covariance if the
progeny are from unrelated dams as we shall assume. The record vector ordered by sires
in treatments is [2, 3, 5, 7, 5, 9, 6, 8, 3].
We first use the basic La Motte method.
1 0 0 0

1 1 0

1 0

V1 pertaining to g11 =

0
0
0
0
0

0
0
0
0
0
0

0
0
0
0
0
0
0

0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0

1
0
0
0
0

0
1
1
0
0
0

0
1
1
0
0
0
0

0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0

0
0
0
1
1

0
0
0
0
0
1

0
0
0
0
0
1
1

0
0
0
0
0
0
0
1

0
0
0
0
0
0
0
1
1

0 0 0 1

0 0 0

0 0

V2 pertaining to g12 =

0 0 0 0

0 0 0

0 0

V3 pertaining to g22 =

V4 pertaining to r11 = dg [1, 1, 1, 0, 0, 0, 0, 0, 0].


V5 pertaining to r22 = dg [0, 0, 0, 1, 1, 1, 1, 1, 1].

23

Use prior values of g11 = 3, g12 = 2, g22 = 4, r11 = 30, r22 = 35. Only the proportionality
of these is of concern. Using these values

=
V

33

0
33

0
3
33

2
0
0
39

2
0
0
4
39

0
2
2
0
0
39

0
2
2
0
0
4
39

0
0
0
0
0
0
0
39

0
0
0
0
0
0
0
4
39

1 Vi V
1 we obtain the following values for Qi (i=1, . . . ,5). These are in
Computing V
the following table (times .001), only non-zero elements are shown.
Element
(1,1)
(1,4),(1,5)
(2,2),(2,3)
(2,6),(2,7)
(3,3)
(3,6),(3,7)
(4,4)
(4,5)
(5,5)
(6,6)
(6,7)
(7,7)
(8,8),(9,9)
(8,9)

Q1
Q2
Q3
Q4
Q5
.92872 -.17278 .00804 .92872 .00402
-.04320 .71675 -.06630 -.04320 -.03315
.78781 -.14657 .00682 .94946 .00341
-.07328 .66638 -.06135 -.03664 -.03068
.78781 -.14657 .00682 .94946 .00341
-.07328 .66638 -.06135 -.03664 -.03068
.00201 -.06630 .54698 .00201 .68165
.00201 -.06630 .54698 .00201 -.13467
.00201 -.06630 .54698 .00201 .68165
.00682 -.12271 .55219 .00341 .68426
.00682 -.12271 .55219 .00341 -.13207
.00682 -.12271 .55219 .00341 .67858
0
0 .54083
0 .67858
0
0 .54083
0 -.13775

We need y X o , o being a GLS solution. The GLS equations are


.08661 .008057
.008057
.140289

The solution is [3.2368, 6.3333]. Then

24

.229319
.862389

y X o = [1.2368, .2368, 1.7632, .6667, 1.3333, 2.6667,


.3333, 1.6667, 3.3333]0
= [I X(X0 V1 X) X0 V1 ]y T0 y.
Next we need T0 Vi T (i=1, . . . ,5) for the variance of y X o . These are
Element
T 0 V1 T T 0 V 2 T T 0 V 3 T T 0 V4 T T 0 V5 T
(1,1)
.84017 -.03573 .00182 .63013 .00091
(1,2),(1,3)
-.45611 -.00817 .00182 -.34208 .00091
(1,4),(1,5)
0 .64814 .00172
0 .00086
(1,6),(1,7)
0 -.64814 .02928
0 .01464
(1,8),(1,9)
(2,8),(2,9)
0
0 -.03101
0 -.01550
(3,8),(3,9)
(2,2),(3,3)
.24761 .01940 .00182 .68571 .00091
(2,3)
.24761 .01940 .00182 -.31429 .00091
(2,4),(2,5)
0 -.35186 .00172
0 .00086
(3,4),(3,5)
(2,6),(2,7)
0 .35186 .02928
0 .01464
(3,6),(3,7)
(4,4),(5,5),(6,6)
0
0 .66667
0 .83333
(7,7),(8,8),(9,9)
(4,5),(6,7),(8,9)
0
0 .66667
0 -.16667
(4,6),(4,7),(4,8)
(4,9),(5,6),(5,7)
0
0 -.33333
0 -.16667
(5,8),(5,9),(6,8)
(6,9),(7,8),(7,9)
Taking all combinations of tr Qi T0 Vj T for the expectation matrix and equating to (y
X o )0 Qi (y X o ) we have these equations to solve.

.00156056 .00029034
.00001350
.00117042 .000006752

.00372880 .00034435 .00021775 .00017218

.00435858
.00001012
.00217929

.00198893
.00000506

.00353862

25

g11
g12
g22
r11
r22

.00270080
.00462513
.00423360
.00424783
.01762701

This gives the solution [.500, 1.496, -2.083, 2.000, 6.333]. Note that the gij do not fall in
the parameter space, but this is not surprising with such a small set of data.
Next we illustrate with quadratics in u1 , . . . , u5 and e1 , . . . , e9 using the same priors
as before.
G11 = dg (1, 1, 0, 0, 0),

0 0 1 0 0

0 0 1 0

0 0 0
G12 =
,

0 0

0
G22

=
= dg (0, 0, 1, 1, 1), R

30 I 0
0 35 I

R11 = dg (1, 1, 1, 0, 0, 0, 0, 0, 0),


R22 = dg (0, 0, 0, 1, 1, 1, 1, 1, 1),

3 0 2 0 0

3 0 2 0

=
4 0 0
G

4 0

4
are
From these, the 3 matrices of quadratics in u

.25

0 .125
0
.25
0
.125
.0625
0
.0625

and

0
0
0
0
0

.0625

.25

0
.25

.25
0
.1875

0
.09375
.0625 .09375
.140625

are
Similarly matrices of quadratics in e
dg (.00111111, .00111111, .00111111, 0, 0, 0, 0, 0, 0),
and
dg (0, 0, 0, 1, 1, 1, 1, 1, 1)*.00081633.

26

0
0
0
0
0

0
0
0
0
.0625

0
.25
0
.1875

0
0
0
.140625

The mixed model coefficient matrix is

.1

0
.03333 .06667
0
0
0

.17143
0
0
.05714 .05714 .05714

.53333
0
.25
0
0

.56667
0
.25
0
.

.43214
0
0

.43214
0

.30714

The right hand side vector is


[.33333, 1.08571, .06667, .26667, .34286, .42857, .31429]0 .
The solution is
[3.2368, 6.3333, -.1344, .2119, -.1218, .2769, -.1550].
Let the last 5 rows of the inverse of the matrix above = Cu . Then
1 Z(G g11 + G g12 + G g22 )Z0 R
1 WC0
V ar(
u) = Cu W0 R
11
12
22
u
1 (R r11 + R r22 )R
1 WC0
+Cu W0 R
11
22
u

.006179 .006179
.003574 .003574 0

.006179
.003574
.003574 0

.002068 .002068 0
=

g11

.002068 0

0
.009191 .009191
.012667 .012667 0

.009191
.012667
.012667 0

.011580 .011580 0
+

g12

.011580 0

.004860 .001976
.010329 .004560 .005769

.004860 .004560
.010329 .005769

.021980 .010443 .011538


+

g22

.021980 .011538

.023076

.004634 .004634
.002681 .002681 0

.004634 .002681
.002681 0

.001551 .001551 0
+

r11

.001551 0

27

.002430 .000988
.005164 .002280 .002884

.002430 .002280
.005164 .002884

.010990 .005221 .005769


+
r22 .

.010990 .005769

.011538
e0 = [1.1024, 4488, 1.5512, .7885, 1.2115, 2.3898, .6102, 1.8217, 3.1783].
1 .
Let C be a g-inverse of the mixed model coefficient matrix, and T = I WCW0 R
Then
V ar(
e) = T(ZG11 Z0 g11 + ZG12 Z0 g12 + ZG22 Z0 g22 + R11 r11 + R22 r22 )T0
.702 .351 .351 .038 .038
.031
.038 0

.176
.176
.019
.019 .019 .019 0

.176
.019
.019 .019 .019 0

.002
.002 .002 .002 0

.002 .002 .002 0


=

.002
.002 0

.002 0

.131

0
0
0
0
0
0
0
0
0

g11

.065
.065
.489
.489 .489 .489 0 0
.033 .033 .245 .245
.245
.245 0 0

.033 .245 .245


.245
.245 0 0

.053 .053
.053
.053 0 0

.053
.053
.053 0 0
g12

.053 .053 0 0

.053 0 0

0 0
0

.006 .003 .003 .045 .045


.045
.045
0

.002
.002
.023
.023 .023 .023
0

.002
.023
.023
.023
.023
0

.447
.447 .226 .226 .221

.447 .226 .226 .221


+

.447
.447 .221

.447 .221

.442

28

0
0
0
.221
.221
.221
.221
.442
.442

g22

.527 .263 .263 .029 .029


.029
.029 0

.632 .369
.014
.014 .014 .014 0

.632
.014
.014 .014 .014 0

.002
.002 .002 .002 0

.002 .002 .002 0

.002
.002 0

.002 0

.003 .002 .002 .023 .023


.023
.023
0

.001
.001
.011
.011 .011 .011
0

.001
.011
.011 .011 .011
0

.723
.277
.113
.113
.110

.723 .113 .113 .110


+

.723 .277 .110

.723 .110

.721

0
0
0
0
0
0
0
0
0

0
0
0
.110
.110
.110
.110
.279
.721

r11

r22 .

Taking the traces of products of Q1 , Q2 , Q3 with V ar(


u) and of Q4 , Q5 with V ar(
e)
and
we get the same expectations as in the La Motte method. Also the quadratics in u
are the same as the La Motte quadratics in (y X o ).
e
If u6 is included, the same quadratics and expectations are obtained. If u6 is included
.
and we compute the following quadratics in u

0 0 0 1

0 0 0

0 0
0 dg (1 1 1 0 0 0) u
, u
0
u

0
1
0
0
0

0
0
1
0
0
0

u
,

0 dg (0, 0, 0, 1, 1, 1) u
and equate to expectations we obtain exactly the same
and u
estimates as in the other three methods. We also could have computed the following
rather than the ones used, namely
quadratics in e
0 dg (1 1 1 0 0 0 0 0 0)
0 dg (0 0 0 1 1 1 1 1 1) e
.
e
e and e
Also we could have computed an approximate MIVQUE by estimating r11 from within
sires in treatment 1 and r22 from within sires in treatment 2.
In most problems the error variances and covariances contribute markedly to computational labor. If no simplification of this computation can be effected, the La Motte
29

1 is
quadratics might be used in place of quadratics in e. Remember, however, that V
1 , G
1
usually a large matrix impossible to compute by conventional methods. But if R
1 Z + G
1 )1 are relatively easy to compute one can employ the results,
and (Z0 R
1 Z + G
1 )1 ZR
1 .
1 = R
1 R
1 Z(Z0 R
V
can be derived
As already discussed, in most genetic problems simple quadratics in u
usually of the form
0
iu
j or u
i A1 u
j.
u

Then these might be used with the La Motte ones for the rij rather than quadratics in e
o
o
for the rij . The La Motte quadratics are in (y X ), the variance of y X being
1 X) X0 V
1 ]V[I X(X0 V
1 X) X0 V
1 ]0 .
[I X(X0 V
6= V in general, and V should be written in terms of gij , rij for
Remember that V
purposes of taking expectations.

15

Other Types Of MIVQUE

The MIVQUE estimators of this chapter are translation invariant and unbiased. La
Motte also presented other estimators including not translation invariant biased estimators
and translation invariant biased estimators.

15.1

Not translation invariant and biased

The LaMotte estimator of this type is


0V

1 (y X)/(n
i = i (y X)
+ 2),
,
and V
are priors. This can also be computed as
where ,
0R
u

1 (y X)
1 (y X)]/(n
0 Z0 R
i = i [(y X)
+ 2),
where

1 Z + G
1 )1 Z0 R
1 (y X).
= (Z0 R
u
are used as priors.

The lower bound on MSE of i is 2i2 /(n + 2), when V,

30

15.2

Translation invariant and biased

An estimator of this type is


1 (y X o )/(n r + 2).
i = i (y X o )0 V
This can be written as
1 y]/(n r + 2).
1 y u
1 y ( o )0 X0 R
0 Z0 R
i [y0 R
and R = R.
The lower
are solution to mixed model equations with G = G
o and u
2

bound on MSE of i is 2i /(n r + 2) when V is used as the prior for V. The lower bound
on i for the translation invariant, unbiased MIVQUE is 2di , when di is the ith diagonal
th
of G1
element of G0 is trW0 Vi W0 Vi for
0 and the ij
1 V
1 X(X0 V
1 X) X0 V
1 .
W0 = V
The estimators of sections 15.1 and 15.2 have the peculiar property that i /j = i /j .
Thus the ratios of estimators are exactly proportional to the ratios of the priors used in
the solution.

16

Expectations Of Quadratics In

Let some g-inverse of (5.51) with priors on G and R be


C11 C12
0
C12 C22

C1
C2

Then
= C2 r, where r is the right hand vector of (5.51), and
0 Q)
= trQ V ar()

E(
0
= trQC2 [V ar(r)]C2 .
!
1 Z
X0 R
0 1
0 1
V ar(r) =
0 R1 Z G (Z R X Z R ZG)
GZ
+

1
X0 R
0R
1
GZ

1 X R
1 ZG).

R (R

(78)

can be obtained from the solution to


When R = Ie2 and G = G e2 ,
X0 X
X0 ZG
0
G Z X G Z0 ZG + G

31

X0 y
G Z0 y

(79)

In this case C2 is the last g rows of a g-inverse of (79).


V ar(r) =

X0 Z
Z0 Z
G
+

) 2
G (Z0 X Z0 ZG
e

X0 X
X 0 ZG
0
0
Z X G
Z ZG

32

e2 .

(80)

Chapter 12
REML and ML Estimation
C. R. Henderson
1984 - Guelph

Iterative MIVQUE

The restricted maximum likelihood estimator (REML) of Patterson and Thompson


(1971) can be obtained by iterating on MIVQUE, Harville (1977). Let the prior value of g
and r be denoted by g0 and r0 . Then compute MIVQUE and denote the estimates by g1
and r1 . Next use these as priors in MIVQUE and denote the estimates g2 and r2 . Continue
this process until gk+1 = gk and rk+1 = rk . Several problems must be recognized.
1. Convergence may be prohibitively slow or may not occur at all.
2. If convergence does occur, it may be to a local rather than a global maximum.
3. If convergence does occur, g and r may not fall in the parameter space.
We can check the last by noting that both Gk and Rk must be positive definite or positive
semidefinite at convergence, where Gk and Rk are

g11
g12
.
.

g12 ...
r11
r
g22 ...

12
and

.
.
.
.

r12 ...
r22 ...

.
.

For positive definitness or positive semidefiniteness all eigenvalues of Gk and Rk must be


non-negative. Writing a computer program that will guarantee this is not trivial. One
possibility is to check at each round, and if the requirement is not met, new starting
or R
at each round
values are chosen. Another possibility is to alter some elements of G

in which either G or R is not a valid estimator. (LRS note: Other possibilities are bending
in which eigenvalues are modified to be positive and the covariance matrix is reformed
using the new eigenvalues with the eigenvectors.)
Quadratic, unbiased estimators may lead to solutions not in the parameter space.
This is the price to pay for unbiasedness. If the estimates are modified to force them
into the parameter space, unbiasedness no longer can be claimed. What should be done
1

in practice? If the purpose of estimation is to accumulate evidence on parameters with


other research, one should report the invalid estimates, for otherwise the average of many
estimates will be biased. On the other hand, if the results of the analysis are to be
used immediately, for example, in BLUP, the estimate should be required to fall in the
in
parameter space. It would seem illogical for example, to reduce the diagonals of u
1

mixed model equations because the diagonals of G are negative.

An Alternative Algorithm For REML

An alternative algorithm for REML that is considerably easier per round of iteration
than iterative MIVQUE will now be described. There is, however, some evidence that
convergence is slower than in the iterative MIVQUE algorithm. The method is based on
the following principle. At each round of iteration find the expectations of the quadratics
and r are equal to g and r. This leads
under the pretense that the current solutions to g
to much simpler expectations. Note, however, that the first iterate under this algorithm
is not MIVQUE. This is the EM (expectation maximization) algorithm, Dempster et al.
(1977).
= g and r = r
From Henderson (1975a), when g
V ar(
u) = G C11 .
V ar(
e) = R WCW0 = R S.

(1)
(2)

V ar(
e) = Cov (
e, e0 ) = Cov[(y WCW0 R1 y), e0 ] = R WCW0 .

(3)

The proof of this is

A g-inverse of the mixed model coefficient matrix is


C00 C01
C10 C11

= C.

Note that if we proceed as in Section 11.5 we will need only diagonal blocks of WCW0
corresponding to the diagonal blocks of R .
V ar(
u) =

X X
i

Gij gij C11

(4)

ji

, and e
be the values computed
See Chapter 11, Section 3 for definition of G . Let C, S, u
th
for the k round of iteration. Then solve in the k+l round of iteration for values of g, r
from the following set of equations.
0 Q1 u
+ trQ1 C11
trQ1 G = u
..
.
2

0 Qb u
0 + trQb C11
trQb G = u
0 Qb+1 e
+ trQb+1 S
trQb+1 R = e
..
.

(5)

0 Qc e
+ trQc S.
trQc R = e
Note that at each round a set of equations must be solved for all elements of g , and
another set of all elements of r . In some cases, however, Q s can be found such that
only one element of gij (or rij ) appears on the left hand side of each equation of (5). Note
1 appears in a Qi the value of Qi changes at each round of iteration. The
also that if G
1 appearing in Qi for e
. Consequently it is desirable to find Qi that
same applies to R

isolate a single gij or rij in each left hand side of (5) and that are not dependent upon G

and R. This can be done for the gij in all genetic problems with which I am familiar.
The second algorithm for REML appears to have the property that if positive definite
G and R are chosen for starting values, convergence, if it occurs, will always be to positive
and R.
This suggestion has been made by Smith (1982).
definite G

ML Estimation

A slight change in the second algorithm for REML, presented in Section 2 results
1 Z + G1 )1 . In place
in an EM type ML algorithm. In place of C11 substitute (Z0 R
1 Z + G
1 )1 Z0 . Using a result reported by Laird and Ware
of WCW0 substitute Z(Z0 R
(1982) substituting ML estimates of G and R for the corresponding parameters in the
mixed model equations yields empirical Bayes estimates of u . As stated in Chapter 8
are also ML estimates of the conditional means of u .
the u
If one wishes to use the LaMotte type quadratics for REML and ML, the procedure
is as follows. For REML iterate on
trQj

1 X) X0 .
Vi i = (y X o )0 Qj (y X o ) + trQj X(X0 V

Qj are the quadratics computed by the LaMotte method described in Chapter 11. Also
this chapter describes the Vi . Further, o is a GLS solution.
ML is computed in the same way as REML except that
1 X) X0 is deleted.
trQj X(X0 V
The EM type algorithm converges slowly if the maximizing value of one or more parameters is near the boundary of the parameter space, eg.
i2 0. The result of Hartley and
Rao (1967) can be derived by this general EM algorithm.
3

Approximate REML

REML by either iterative MIVQUE or by the method of Section 2 is costly because


every round of iteration requires a g-inverse of the mixed model coefficient matrix. The
cost could be reduced markedly by iterating on approximate MIVQUE of Section 11.7.
Further simplification would result in the R = Ie2 case by using the residual mean square
of OLS as the estimate of e2 . Another possibility is to use the method of Section 2, but
with an approximate g-inverse and solution at each round of iteration. The properties of
such an estimation are unknown.

A Simple Result For Expectation Of Residual Sum


Of Squares

Section 11.13 shows that in a model with R = R e2 , V ar(ui ) = Gi e2 , and


0
Cov(ui , uj ) = 0, one of the quadratics that can be used is
0
y0 R1
y (soln. vector) (r.h.s. vector)

(6)

with equations written as (77) in Chapter 11. R and Gi are known. Then if = e2 /i2 ,
as is defined in taking expectations for the computations of Section 2, the expectation of
(6) is
[n rank (X)]e2 .

(7)

Biased Estimation With Few Iterations

What if one has only limited data to estimate a set of variances and covariances,
but prior estimates of these parameters have utilized much more data? In that case it
might be logical to iterate only a few rounds using the EM type algorithm for REML or
ML. Then the estimates would be a compromise between the priors and those that would
be obtained by iterating to convergence. This is similar to the consequences of Bayesian
estimation. If the priors are good, it is likely that the MSE will be smaller than those for
ML or REML. A small simulation trial illustrates this. The model assumed was
yij
X0
ni
V ar(e)
V ar(a)

=
=
=
=
=

Xij + ai + eij .
(3, 2, 5, 1, 3, 2, 3, 6, 7, 2, 3, 5, 3, 2).
(3, 2, 4, 5).
4 I,
I,
4

Cov(a, e0 ) = 0.
5000 samples were generated under this model and EM type REML was carried out with
starting values of e2 /a2 = 4, .5, and 100. Average values and MSE were computed for
rounds 1, 2, ..., 9 of iteration.
Starting Value e2 /a2 = 4

e2

a2

e2 /
a2
Rounds Av. MSE Av. MSE Av. MSE
1
3.98 2.37 1.00
.22 4.18
.81
2
3.93 2.31 1.04
.40 4.44 2.97
3
3.88 2.31 1.10
.70 4.75 6.26
4
3.83 2.34 1.16 1.08 5.09 10.60
5
3.79 2.39 1.22 1.48 5.47 15.97
6
3.77 2.44 1.27 1.86 5.86 22.40
7
3.75 2.48 1.31 2.18 6.26 29.90
8
3.74 2.51 1.34 2.43 6.67 38.49
9
3.73 2.53 1.35 2.62 7.09 48.17
In this case only one round appears to be best for estimating e2 /a2 .
Starting Value e2 /a2 = .5
a2

e2 /

a2
MSE Av. MSE Av. MSE
2.42 3.21 7.27 1.08 8.70
2.37 2.53 4.79 1.66 6.27
2.38 2.20 4.05 2.22 5.11
2.41 2.01 3.77 2.75 5.23
2.43 1.88 3.64 3.28 6.60
2.45 1.79 3.58 3.78 9.20
2.47 1.73 3.54 4.28 12.99
2.49 1.67 3.51 4.76 17.97
2.51 1.63 3.50 5.23 24.11

e2
Rounds
1
2
3
4
5
6
7
8
9

Av.
3.14
3.30
3.40
3.46
3.51
3.55
3.57
3.60
3.61

Starting Value
e2 /
a2

a2

e2
Rounds Av. MSE Av. MSE
1
4.76 4.40 .05
.91
2
4.76 4.39 .05
.90
3
4.76 4.38 .05
.90
4
4.75 4.37 .05
.90
5
4.75 4.35 .05
.90
6
4.75 4.34 .05
.90
7
4.75 4.32 .05
.90
8
4.74 4.31 .06
.89
9
4.74 4.28 .06
.89

= 100
a2

e2 /
Av. MSE
.99 9011
.98 8818
.97 8638
.96 8470
.95 8315
.94 8172
.92 8042
.91 7923
.90 7816

Convergence with this very high starting value of e2 /a2 relative to the true value of 4 is
very slow but the estimates were improving with each round.

The Problem Of Finding Permissible Estimates

Statisticians and users of statistics have for many years discussed the problem of
estimates of variances that are less than zero. Most commonly employed methods
of estimation are quadratic, unbiased, and translation invariant, for example ANOVA
estimators, Methods 1,2, and 3 of Henderson, and MIVQUE. In all of these methods there
is a positive probability that a solution to one or more variances will be negative. Strictly
speaking, these are not really estimates if we define, as some do, that an estimate must
lie in the parameter space. But, in general, we cannot obtain unbiasedness unless we are
prepared to accept such solutions. The argument used is that such estimates should
be reported because eventually there may be other estimates of the same parameters
obtained by unbiased methods, and then these can be averaged to obtain better unbiased
estimates.
Other workers obtain truncated estimates. That is, given estimates
12 , ...,
q2 , with
2
say
q2 < 0, the estimates are taken as
12 , ...,
q1
, 0. Still others revise the model so that
the offending variable is deleted from the model, and new estimates are then obtained of
the remaining variances. If these all turn out to be non-negative, the process stops. If
some new estimate turns negative, then that variance is dropped from the model and a
new set of estimates obtained.
These truncated estimators can no longer be defined as unbiased. Verdooren (1980) in
an interesting review of variance component estimation uses the terms permissible and
impermissible to characterize estimators. Permissible estimators are those in which the
solution is guaranteed to fall in the parameter space, that is all estimates of variances are

non-negative. Impermissible estimators are those in which there is a probability greater


than 0 that the solution will be negative.
If one insists on permissible estimators, why not then use some method that guarantees this property while at the same time invoking, if possible, other desirable properties
of estimators such as consistency, minimum mean squared error, etc.? Of course unbiasedness cannot, in general, be invoked. For example, an algorithm for ML, Henderson (1973),
guarantees a permissible estimator provided convergence occurs. A simple extension of
this method due to Harville (1977), yields permissible estimators by REML. The problem
of permissible estimators is especially acute in multiple trait models. For example, in a
two trait phenotypic model say
yij = i + e1j
y2j = 2 + e2j
we need to estimate
V ar

e1j
e2j

c11 c12
c12 c22

. c11 0, c22 0, c11 c22 c212 .

The last of these criteria insures that the estimated correlation between e1j and e2j falls
in the range -1 to 1. The literature reporting genetic correlation estimates contains many
cases in which the criteria are not met, this in spite of probable lack of reporting of many
other sets of computations with such results. The problem is particularly difficult when
there are more than 2 variates. Now it is not sufficient for all estimates of variances to
be non- negative and all pairs of estimated correlations to fall in the proper range. The
requirement rather is that the estimated variance-covariance matrix be either positive
definite or at worst positive semi-definite. A condition guaranteeing this is that all latent
roots (eigenvalues) be positive for positive definiteness or be non-negative for positive
semidefiteness. Most computing centers have available a good subroutine for computing
eigenvalues. We illustrate with a 3 3 matrix in which all correlations are permissible,
but the matrix is negative definite.

3 3 4

4 4
3

4
4 6
The eigenvalues for this matrix are (9.563, 6.496, -3.059), proving that the matrix is
negative definite. If this matrix represented an estimated G for use in mixed model
equations, one would add G1 to an appropriate submatrix, of OLS equations, but

G1

.042 .139
.011
=

.147
.126
,
.016

so one would add negative quantities to the diagonal elements, and this would make no
sense. If the purpose of variance-covariance estimation is to use the estimates in setting
up mixed model equations, it is essential that permissible estimators be used.
7

Another difficult problem arises when variance estimates are to be used in estimating
h . For example, in a sire model, an estimate of h2 often used is
2

2 = 4
h
s2 /(
s2 +
e2 ).
By definition 0 < h2 < 1, the requirement that
s2 > 0 and
e2 > 0 does not insure
2 is permissible. For this to be true the permissible range of
that h
s2 /
e2 is 0 to 31 . This
would suggest using an estimation method that guarantees that the estimated ratio falls
in the appropriate range.
In the multivariate case a method might be derived along these lines. Let some
translation invariant unbiased estimator be the solution to
C
v = q,
where q is a set of quadratics and Cv is E(q). Then solve these equations subject to a set
to fall in the parameter space, as a minimum, all eigenvalues
of inequalities that forces v
comprises the elements of the variance-covariance matrix.
0 where v

Method For Singular G

When G is singular we can use a method for EM type REML that is similar to

and the expectation is trGi V ar().


0 Gi ,
MIVQUE in Section 11.16. We iterate on

Under the pretense that G = G and R = R


GG
C22 .
= G
V ar()
C22 is the lower q 2 submatrix of a g-inverse of the coefficient matrix of (5.51), which has
rank, r(X) + r(G). Use a g-inverse with qrank(G) rows (and cols.) zeroed in the last q
rows (and cols.). Let G be a g-inverse of G with the same qrank(G) rows (and cols.)
zeroed as in C22 . For ML substitute (GZ0 R1 ZG) for C22 .

Chapter 13
Effects of Selection
C. R. Henderson
1984 - Guelph

Introduction

The models and the estimation and prediction methods of the preceding chapters
have not addressed the problem of data arising from a selection program. Note that
the assumption has been that the expected value of every element of u is 0. What if u
represents breeding values of animals that have been produced by a long-time, effective,
selection program? In that case we would expect the breeding values in later generations
to be higher than in the earlier ones. Consequently the expected value of u is not really 0
as assumed in the methods presented earlier. Also it should be noted that, in an additive
genetic model, Aa2 is a correct statement of the covariance matrix of breeding values if no
selection has taken place and a2 = additive genetic variance in an unrelated, non-inbred,
unselected population. Following selection this no longer is true. Generally variances are
reduced and the covariances are altered. In fact, there can be non-zero covariances for
pairs of unrelated animals. Further, we often assume for one trait that V ar(e) = Ie2 .
Following selection this is no longer true. Variances are reduced and non-zero covariances
are generated. Another potentially serious consequence of selection is that previously
uncorrelated elements of u and e become correlated with selection. If we know the new
first and second moments of (y, u) we can then derive BLUE and BLUP for that model.
This is exceedingly difficult for two reasons. First, because selection intensity varies from
one herd to another, a different set of parameters would be needed for each herd, but
usually with too few records for good estimates to be obtained. Second, correlation of u
with e complicates the computations. Fortunately, as we shall see later in this chapter,
computations that ignore selection and then use the parameters existing prior to selection
sometimes result in BLUE and BLUP under the selection model. Unfortunately, comparable results have not been obtained for variance and covariance estimation, although
there does seem to be some evidence that MIVQUE with good priors, REML, and ML
may have considerable ability to control bias due to selection, Rothschild et al. (1979).

An Example of Selection

We illustrate some effects of selection and the properties of BLUE, BLUP, and OLS
by a progeny test example. The progeny numbers were distributed as follows
Treatments
1
2
10
500
10
100
10
0
10
0

Sires
1
2
3
4

We assume that the sires were ranked from highest to lowest on their progeny averages in
Period 1. If that were true in repeated sampling and if we assume normal distributions,
one can write the expected first and second moments. Assume unrelated sires, e2 = 15,
s2 = 1 under a model,
yijk = si + pj + eijk .
With no selection

y11
y21
y31
y41
y12
y22

p1
p1
p1
p1
p2
p2

2.5 0
0
0

2.5 0
0

2.5 0
V ar =

2.5

1
0
0
0
1.03

0
1
0
0
0
1.15

With ordering of sires according to first records the corresponding moments are

1.628
.460
.460
1.628
.651
.184

+
+
+
+
+
+

p1
p1
p1
p1
p2
p2

1.229 .614 .395

.901 .590

.901

and

.262
.395
.614
1.229

.492
.246
.158
.105
.827

.246
.360
.236
.158
.098
.894

Further, with no ordering E(s) = 0, V ar(s) = I. With ordering these become

.651
.184
.184
.651

.797 .098 .063 .042

.744 .094 .063

and
.

.744 .098
.797

These results are derived from Teicheroew (1956), Sarhan and Greenberg (1956), and
Pearson (1903).
2

Suppose p1 = 10, p2 = 12. Then in repeated sampling the expected values of the
6 subclass means would be

11.628 12.651
10.460 12.184

9.540

8.372
Applying BLUE and BLUP, ignoring selection, to these expected data the mixed model
equations are

40

p1
0 10 10 10 10

600 500 100 0 0

p2

525
0 0 0 s1
=

125 0 0 s2

25 0 s3
s4
25

400.00
7543.90
6441.78
1323.00
95.40
83.72

The solution is [10.000, 12.000, .651, .184, .184, .651], thereby demonstrating
and s. The reason for this is discussed in Section 13.5.1.
unbiasedness of p
In contrast the OLS solution gives biased estimators and predictors. Forcing
0 as in the BLUP solution we obtain as the solution

si =

[10.000, 11.361, 1.297, .790, .460, 1.628].


Except for p1 these are biased. If OLS is applied to only the data of period 2, so1 so2 is
an unbiased predictor of s1 s2 . The equations in this case are
600 500 100
7543.90
po2

o
0 s1 = 6325.50
500 500
.
o
1218.40
s2
100 0 100

A solution is [0, 12.651, 12.184]. Then so1 so2 = .467 = E(s1 s2 ) under the selection
model. This result is equivalent to a situation in which the observations on the first period
are not observable and we define selection at that stage as selection on u, in which case
treating u as fixed in the computations leads to unbiased estimators and predictors. Note,
however, that we obtain invariant solutions only for functions that are estimable under a
fixed u model. Consequently p2 is not estimable and we can predict only the difference
between s1 and s2 .

Conditional Means And Variances

Pearson (1903) derived results for the multivariate normal distribution that are extremely useful for studying the selection problem. These are the results that were used in
3

the example in Section 13.2. We shall employ the notation of Henderson (1975a), similar
to that of Lawley (1943), rather than Pearsons, which was not in matrix notation. With
0
0
no selection [v1 v2 ] have a multivariate normal distribution with means,


1 2

v1
v2

, and V ar

C11 C12
0
C12 C22

(1)

Suppose now in conceptual repeated sampling v2 is selected in such a way that it has
mean = 2 + k and variance = Cs . Then Pearsons result is
v1
v2

Es

v1
v2

V ars

1 + C12 C1
22 k
2 + k

(2)

C11 C12 C0 C12 C12 C1


22 Cs
1 0
Cs
Cs C22 C12

(3)

1
where C0 = C1
22 (C22 Cs )C22 . Henderson (1975) used this result to derive BLUP
and BLUE under a selection model with a multivariate normal distribution of (y, u, e)
assumed. Let w be some vector correlated with (y, u). With no selection

V ar

y
u
e
w

y
u
e
w

X
0
0
d
V
GZ0
R
B0

ZG
G
0
0
Bu

(4)
R
0
R
0
Be

B
Bu
Be
H

(5)

and
V = ZGZ0 + R, B = ZBu + Be .
Now suppose that in repeated sampling w is selected such that E(w) = s 6= d, and
V ar(w) = Hs . Then the conditional moments are as follows.

y
X + Bt

Bu t
E u =
,
w
s

(6)

where t = H1 (s d).
0

V BH0 B
ZG BH0 B0 BH1 Hs
y
0

V ar u = GZ0 BH0 B0 G Bu H0 Bu Bu H1 Hs ,
0
w
Hs H1 B0
Hs H1 Bu
Hs

where H0 = H1 (H Hs )H1 .
4

(7)

BLUE And BLUP Under Selection Model

To find BLUE of K0 and BLUP of u under this conditional model, find linear
functions that minimize diagonals of V ar(K0 ) and variance of diagonals of (
u u)
subject to
E(K0 o ) = K0 and E(
u) = Bu t.
This is accomplished by modifying GLS and mixed model equations as follows.
X0 V1 X X0 V1 B
B0 V1 X B0 V1 B

o
to

X0 V1 y
B0 V1 y

(8)

BLUP of k0 + m0 u is
k0 o + m0 u to + m0 GZ0 (y X o Bto ).
Modified mixed model equations are
X0 R1 X X0 R1 Z
X0 R1 Be
0 1

0 1
1
Z0 R1 Be G1 Bu
ZR X ZR Z+G

0
0
0
0
0
Be R1 X Be R1 Z Bu G1 Be R1 Be + Bu G1 Bu

o
0

0
o
.
u = X0 R1 y Z0 R1 y Be R1 y
o
t

(9)

BLUP of k0 + m0 u is k0 o + m0 uo . In equations (8) and (9) we use uo rather than u


because the solution is not always invariant. It is necessary therefore to examine whether
the function is predictable. The sampling and prediction error variances come from a
g-inverse of (8) or (9). Let a g-inverse of the matrix of (8) be
C11 C12
0
C12 C22

then
V ar(K0 o ) = K0 C11 K.

(10)

Let a g-inverse of the matrix of (9) be

C11 C12 C13


0

C12 C22 C23 ,


0
0
C13 C23 C33
Then
V ar(K0 o )
u)
Cov(K0 o , u
V ar(
u u)
0 o
0)
Cov(K , u
V ar(
u)

=
=
=
=
=

K0 C11 K.
K0 C12 .
C22 .
0
K0 C13 Bu .
0
0
0
G C22 + C23 Bu + Bu C23 Bu H0 Bu .
5

(11)
(12)
(13)
(14)
(15)

Note that (10), ..., (13) are analogous to the results for the no selection model, but (14)
and (15) are more complicated. The problems with the methods of this section are that
w may be difficult to define and the values of Bu and Be may not be known. Special cases
exist that simplify the problem. This is true particularly if selection is on a subvector
of y, and if estimators and predictors can be found that are invariant to the value of
associated with the selection functions.

Selection On Linear Functions Of y

Suppose that whatever selection has occurred has been a consequence of use of the
record vector or some subvector of y. Let the type of selection be described in terms of
a set of linear functions, say L0 y, such that
E(L0 y) = L0 X + t,
where t 6= 0. t would be 0 if there were no selection.
V ar(L0 y) = Hs .
Let us see how this relates to (9).
Bu = GZ0 L, Be = RL, H = L0 VL.
Substituting these values in (9) we obtain
X0 R1 y
o
X0 R1 X
X0 R1 Z
X0 L
0 1

0 1

0 1
1
= Z R y .
0 u
ZR X ZR Z+G
0
0
L0 y

LX
0
L VL

5.1

(16)

Selection with L0 X = 0

is a solution to the mixed


An important property of (16) is that if L0 X = 0, then u
model equations assuming no selection. Thus we have the extremely important result
that whenever L0 X = 0, BLUE and BLUP in the selection model can be computed by
using the mixed model equations ignoring selection. Our example in Section 2 can be
formulated as a problem with L0 X = 0. Order the observations by sires within periods.
Let
0
0
0
0
0
0
y0 = [
y11. , y21. , y31. , y41. , y12. , y22. ].
According to our assumptions of the method of selection
y11. > y21. > y31. > y41. .
6

Based on this we can write

110 110 0620


0
0
0
0

0
L = 010
110 110 0610
0
0
0
0
020
110 110 0600
where
1010 denotes a row vector of 10 one0 s.
00620 denotes a null row vector with 620 elements, etc.
It is easy to see that L0 X = 0, and that explains why we obtain unbiased estimators
and predictors from the solution to the mixed model equations.
Let us consider a much more general selection method that insures that L0 X = 0.
Suppose in the first cycle of selection that data to be used in selection comprise a subvector
of y, say ys . We know that Xs , consisting of such fixed effects as age, sex and season,
causes confusion in making selection decisions, so we adjust the data for some estimate of
Xs , say Xs o so the data for selection become ys Xs o . Suppose that we then evaluate
0
the ith candidate for selection by the function ai (ys Xs o ). There are c candidates for
selection and s of them are to be selected. Let us order the highest s of the selection
functions with labels 1 for the highest, 2 for the next highest, etc. Leave the lowest c s
unordered. Then the animals labelled 1, ..., s are selected, and there may, in addition,
be differential usage of them subsequently depending upon their rank. Now express these
selection criteria as a set of differences, of a0 (ys Xs o ),
1 2, 2 3, (s 1) s, s (s + 1), ..., s c.
Because Xs o is presumably a linear function of y these differences are a set of linear
functions of y, say L0 y. Now suppose o is computed in such a way that E(Xs o ) = Xs
in a no selection model. (It need not be an unbiased estimator under a selection model,
but if it is, that creates no problem). Then L0 X will be null, and the mixed model
equations ignoring selection yield BLUE and BLUP for the selection model. This result
and R
will result in biases
is correct if we know G and R to proportionality. Errors in G
and R

under a selection model, the magnitude of bias depending upon how seriously G
depart from G and R and upon the intensity of selection. The result also depends upon
normality. The consequences of departure from this distribution are not known in general,
but depend upon the form of the conditional means.
We can extend this description of selection for succeeding cycles of selection and
still have L0 X = 0. The results above depended upon the validity of the Pearson result
and normality. Now with continued selection we no longer have the multivariate normal
distribution, and consequently the Pearson result may not apply exactly. Nevertheless
with traits of relatively low heritability and with a new set of normally distributed errors
for each new set of records, the conditional distribution of Pearson may well be a suitable
approximation.
7

With Non-Observable Random Factors

The previous section deals with strict truncation selection on a linear function of
records. This is not entirely realistic as there certainly are other factors that influence
the selection decisions, for example, death, infertility, undesirable traits not recorded
as a part of the data vector, y. It even may be the case that the breeder did have
available additional records and used them, but these were not available to the person or
organization attempting to estimate or predict. For these reasons, let us now consider a
different selection model, the functions used for making selection decision now being
0

ai (y X o ) + i
where i is a random variable not observable by the person performing estimation and
prediction, but may be known or partially known by the breeder. This leads to a definition
of w as follows
w = L0 y + .
Cov(y, w0 )
Cov(u, w0 )
Cov(e, w0 )
V ar(w)

=
=
=
=

B = VL + C, where C = Cov(y, 0 ).
Bu = GZ0 L + Cu , where Cu = Cov(u, 0 )
Be = RL + Ce , where Ce = Cov(e, 0 ).
L0 VL + L0 C + C0 L + C , where C = V ar().

(17)
(18)
(19)
(20)

Applying these results to (9) we obtain the modified mixed model equations below)
X0 R1 X
X0 R1 Z
X0 L + X0 R1 Ce
0 1
Z0 R1 Z + G1
ZR1 Ce G1 Cu

ZR X
0
0
0
1
1 0
1
0

L X + Ce R X Ce R Z + Cu G

X0 R1 y
o
0 1


= ZR y
,
u
0
0
1

L y + Ce R y

(21)

where = L0 VL + Ce R1 Ce + Cu G1 Cu + L0 C + C0 L.
Now if L0 X = 0 and if is uncorrelated with u and e, these equations reduce
to the regular mixed model equations that ignore selection. Thus the non-observable
variable used in selection causes no difficulty when it is uncorrelated with u and e. If
the correlations are non-zero, one needs the magnitudes of Ce , Cu to obtain BLUE and
BLUP. This could be most difficult to determine. The selection models of Sections 5 and
6 are described in Henderson (1982).

Selection On A Subvector Of y

Many situations exist in which selection has occurred on y1 , but y2 is unselected,


where the model is
y1
y2

X1
X2

e1
e2

V ar

Z1 u
Z2 u

R11 R12
R12 R22

e1
e2

Presumably y1 are data from earlier generations. Suppose that selection which has occurred can be described as
!
y1
0
0
L y = (M 0)
.
y2
Then the equations of (16) become
0

X0 R1 y
o
X0 R1 X
X0 R1 Z
X1 M

0 1

= Z0 R1 y
0

Z R X Z0 R1 Z + G1
u

M 0 y1
M0 X1
0
M0 V11 M

(22)

Then if M0 X1 = 0, unmodified mixed model equations yield unbiased estimators and


predictors. Also if selection is on M0 y1 plus a non-observable variable uncorrelated with
u and e and M0 X1 = 0, the unmodified equations are appropriate.
Sometimes y1 is not available to the person predicting functions of and u. Now if
we assume that R12 = 0,
0

E(y2 | M0 y1 ) = Z2 GZ1 Mk.


0
E(u | M0 y1 ) = GZ1 Mk,

where
k = (M0 V11 M)1 t,
0

t being the deviation of mean of M0 y1 from X1 . If we solve for o and uo in the equations
(23) that regard u as fixed for purposes of computation, then
E[K0 o + T0 uo ] = K0 + E[T0 u | M0 y1 ]
provided that K0 + T0 u is estimable under a fixed u model.
0

1
X2 R1
22 X2 X2 R22 Z2
0
0
1
1
Z2 R22 X2 Z2 R22 Z2

o
uo

X2 R1
22 y2
0
1
Z2 R22 y2

(23)

This of course does not prove that K0 o + T0 uo is BLUP of this function under M0 y1
selection and utilizing only y2 . Let us examine modified mixed model equations regarding
y2 as the data vector and M0 y1 = w. We set up equations like (21).
0

Be = Cov[e2 , y1 M] = 0 if we assume R12 = 0.


0
0
Bu = Cov(u, y1 M) = GZ1 M.
Then the modified mixed model equations become
0

X2 R1
X2 R1
0
o
X2 R1
22 X2
22 Z2
22 y2
0
0
o
0 1

0 1
1
1
Z1 M
u = Z2 R22 y2 .
Z2 R22 X2 Z2 R22 Z2 + G
0

0
0
M0 Z1
M0 Z1 GZ1 M

(24)

A sufficient set of conditions for the solution to o and uo in these equations being equal
to those of (23) is that M0 = I and Z1 be non-singular. In that case if we absorb we
obtain the equations of (23).
Now it seems implausible that Z1 be non-singular. In fact, it would usually have
1 be the mean
more rows than columns. A more realistic situation is the following. Let y
1 vector. Then the model for y
1 is
of smallest subclasses in the y
1 + Z
1 u + e1 .
1 = X
y
See Section 1.6 for a description of such models. Now suppose selection can be described
as I
y1 . Then
Be = 0 if R12 = 0, and
1.
Bu = Z
Then a sufficient condition for GLS using y2 only and computing as though u is fixed
1
to be BLUP under the selection model and regarding y2 as that data vector is that Z
be non-singular. This might well be the case in some practical situations. This is the
selection model in our sire example.

Selection On u

Cases exist in animal breeding in which the data represent observations associated
with u that have been subject to prior selection, but with the data that were used for such
selection not available. Henderson (1975a) described this as L0 u selection. If no selection
on the observable y vector has been effected, BLUE and BLUP come from solution to
equations (25).
X0 R1 X
X0 R1 Z
0
o
X0 R1 y
0 1

L
Z R X Z0 R1 Z + G1
uo = Z0 R1 y
0
L0
L0 GL

10

(25)

These reduce to (26) by absorbing .


X0 R1 X X0 R1 Z
Z0 R1 X Z0 R1 Z + G1 L(L0 GL)1 L0

o
uo

X0 R1 y
Z0 R1 y

(26)

since the solution may not be unique, in which


The notation uo is used rather than u
case we need to consider functions of uo that are invariant to the solution. It is simple
to prove that K0 o + M0 uo is an unbiased predictor of K0 + M0 u, where o and uo are
some solution to (27) and this is an estimable function under a fixed u model
X0 R1 X X0 R1 Z
Z0 R1 X Z0 R1 Z

o
uo

X0 R1 y
Z0 R1 y

(27)

A sufficient condition for this to be BLUP is that L = I. The proof comes by substituting
I for L in (26). In sire evaluation L0 u selection can be accounted for by proper grouping.
Henderson (1973) gave an example of this for unrelated sires. Quaas and Pollak (1981)
extended this result for related sires. Let G = As2 . Write the model for progeny as
y = Xh + ZQg + ZS + e,
where h refers to fixed herd-year-season and g to fixed group effects. Then it was shown
that such grouping is equivalent to no grouping, defining L = G1 Q, and then using (25).
We illustrate this method with the following data.
group sire ni
1
1
2
2
3
3
1
2
4
2
5
3
3
6
1
7
2
8
1
9
2

1 0 .5 .5

1 0 0

1 .25

A =

0
.5
0
0
1

.25
0
.5
.125
0
1

11

yi
10
12
7
6
8
3
5
2
8

.25
0
.125
.5
0
.0625
1

0
.25
0
0
.5
0
0
1

.125
0
.25
.0625
0
.5
.03125
0
1

Assume a model yijk = + gi + sij + eijk . Let e2 = 1, s2 = 121 , then G = 121 A.


The solution to the mixed model equations with dropped is
= (4.8664, 2.8674, 2.9467),
g
s = (.0946, .1937, .1930, .0339, .1350, .1452, .0346, .1192, .1816).
The sire evaluations are gi + sij and these are (4.961, 4.673, 5.059, 2.901, 2.732, 3.092,
2.912, 2.827, 3.128).

Q0

1 1 1 0 0 0 0 0 0

= 0 0 0 1 1 0 0 0 0 .
0 0 0 0 0 1 1 1 1

This gives

L0

12 16 12 8 8 8
0
0 0

1
0 20 20
0 8 8 0
= 8 8
= G Q,
0
0 8 8 8 12 16 16 8

and

40 16 8

40 16
L0 GL =
.
52
Then the equations like (25) give a solution
o = 2.9014,
so = (2.0597, 1.7714, 2.1581, 0, .1689, .1905, .0108, .0739, .2269),
= (1.9651, .0339, .0453).
The sire evaluation is o + soi and this is the same as when groups were included.

Inverse Of Conditional A Matrix

In some applications the base population animals are not a random sample from some
population, but rather have been selected. Consequently the additive genetic variancecovariance matrix for these animals is not a2 I, where a2 is the additive genetic variance in
2
2
the population from which these animals were were taken. Rather it is As a
, where a
2
6= a in general. If the base population had been a random sample from some population,
the entire A matrix would be
I
A12
0
A12 A22
12

(28)

The inverse of this can be found easily by the method described by Henderson (1976).
Denote this by
C11 C12
0
C12 C22

(29)

If the Pearson result holds, the A matrix for this conditional population is
!

As
As A12
0
0
A12 As A22 A12 (I As )A12

(30)

The inverse of this matrix is


Cs C12
0
C12 C22

(31)
0

where Cs = A1
s C12 A12 ,

(32)

and C12 , C22 are the same as in (29)


Note that most of the elements of the inverse of the conditional matrix (31) are the
same as the elements of the inverse of the unconditional matrix (29). Thus the easy
method for A1 can be used, and the only elements of the unconditional A needed are
those of A12 . Of course this method is not appropriate for the situation in which As is
singular. We illustrate with

1.0 0

1.0

unconditional A =

.2
.1
1.1

.3
.2
.3
1.2

.1
.2
.5
.2
1.3

The first 2 animals are selected so that


As =

.7 .4
.4
.8

Then by (30) the conditional A is

.7 .4 .1
.13

.8 0
.04

1.07
.25

1.117

.01
.12
.47
.151
1.273

The inverse of the unconditional A is

1.103168

.064741 .136053 .251947


1.062405
.003286 .170155
1.172793 .191114
.977683

13

.003730
.143513
.411712
.031349
.945770

The inverse of the conditional A is

2.103168 1.064741 .136053 .251947

1.812405
.003286 .170155

1.172793 .191114

.977683

A1
=
s

2.0 1.0
1.0 1.75

.2 .1

= .3 .2 ,
.1 .2

and
A1
s

.136053 .251947 .003730


.003286 .170155 .143513

, C12 =

A12

.003730
.143513
.411712
.031349
.954770

C12 A12 =

2.103168 1.064741
1.812405

which checks with the upper 2 2 submatrix of the inverse of conditional A.

10

Minimum Variance Linear Unbiased Predictors

In all previous discussions of prediction in both the no selection and the selection
model we have used as our criteria linear and unbiased with minimum variance of the
prediction error. That is, we use a0 y as the predictor of k0 + m0 u and find a that
minimizes E(a0 y k0 m0 u)2 subject to the restriction that E(a0 y) = k0 + E(m0 u).
This is a logical criterion for making selection decisions. For other purposes such as
estimating genetic trend one might wish to minimize the variance of the predictor rather
than the variance of the prediction error. Consequently in this section we shall derive a
predictor of k0 + m0 u, say a0 y, such that E(a0 y) = k0 + E(m0 u) and has minimum
variance. For this purpose we use the L0 y type of selection described in Section 5. Let
E(L0 y) = L0 X + t, t6=0.
V ar(L0 y) = Hs 6=L0 VL.
Then
E(y | L0 y) = X + VL(L0 VL)1 t X + VLd.
E(u | L0 y) = GZ0 L(L0 VL)1 t GZ0 Ld.
V ar(y | L0 y) = V VL(L0 VL)1 (L0 V Hs )(L0 VL)1 L0 V Vs .
Then we minimize V ar(a0 y) subject to E(a0 y) = k0 + m0 GZ0 Ld. For this expectation
to be true it is required that
X0 a = k and L0 Va = L0 ZGm.
14

Therefore we solve equations (33) for a.

Vs X VL
a
0

0 0
k
X0
=

0
0
LV 0 0

L ZGm

(33)

Let a g-inverse of the matrix of (33) be

C11 C12 C13


0

C12 C22 C23 .


0
0
C13 C23 C33
Then

(34)

a0 = k0 C12 + m0 GZ0 LC13 .


But it can be shown that a g-inverse of the matrix of (35) gives the same values of
C11 , C12 , C13 . These are subject to L0 X = 0,
C11 = V1 V1 X(X0 V1 X) X0 V1 L(L0 V1)1 L0 .
C12 = V1 X(X0 V1 X)1 , C13 = L(L0 VL)1 L0 .
Consequently we can solve for a in (35), a simpler set of equations than (33).

V
X VL
a
0

0
X
0
0

k
=

.
0
0
LV 0 0

L ZGm

(35)

By techniques described in Henderson (1975) it can be shown that


a0 y = k0 o + m0 GZ0 Lto
where o , to are a solution to (36).
X0 R1 X
X0 R1 Z
0
o
X0 R1 y
0 1

= Z0 R1 y
0 u
Z R X Z0 R1 Z + G1
.
0
0
o
0
0
L VL
t
Ly

(36)

Thus o is a GLS solution ignoring selection, and to = (L0 VL)1 L0 y. It was proved in
Henderson (1975a) that
V ar(K0 o ) = K0 (X0 V1 X) K = K0 C11 K,
Cov(K0 o , t0 ) = 0, and
V ar(t) = (L0 VL)1 Hs (L0 VL)1 .
, is
Thus the variance of the predictor, K0 o + m0 u
K0 C11 K + M0 GZ0 L(L0 VL)1 Hs (L0 VL)1 L0 ZGM.
15

(37)

In contrast to BLUP under the L0 y (L0 X = 0) selection model, minimization of


prediction variance is more difficult than minimization of variance of prediction error
because the former requires writing a specific L matrix, and if the variance of the predictor
is wanted, an estimate of V ar(L0 y) after selection is needed.
We illustrate with the following example with phenotypic observations in two generations under an additively genetic model.
Time
1
y11
y12
y13

2
y24
y25
y26

The model is
yij = ti + aij + eij .

1 0 0 .5 .5

1 0 0 0

1 0 0
V ar(a) =

1 .25

0
.5
0
0
0
1

This implies that animal 1 is a parent of animals 4 and 5, and animal 2 is a parent of
animal 6. Let V ar(e) = 2I6 . Thus h2 = 1/3. We assume that animal 1 was chosen to have
2 progeny because y11 > y12 . Animal 2 was chosen to have 1 progeny and animal 3 none
because y12 > y13 . An L matrix describing this type of selection and resulting in L0 X = 0
is
!
1 1
0 0 0 0
.
0
1 1 0 0 0
Suppose we want to predict
31 (1 1 1 1 1 1) u.
This would be an estimate of the genetic trend in one generation. The mixed model

16

coefficient matrix modified for L0 y is

1.5

0
1.5

.5
0
2.1667

.5
0
0
1.8333

.5
0
0
0
0
0
0
.5
.5
.5
0
0
0 .6667 .6667
0
0
0
0
0
0
.6667 0
0
1.5
0
0
0
0
0
1.8333
0
0
0
0
1.8333
0
0
0
1.8333 0
0
6 3
6

The right hand sides are X0 R1 y Z0 R1 y L0 y


is found that BLUP of m0 u is

0

. Then solving for functions of y it

(.05348 .00208 .05556 .00623 .00623 .01246)y.


In contrast the predictor with minimum variance is
[ .05556 0 .05556 0 0 0 ]y.
This is a strange result in that only 2 of the 6 records are used. The variances of these
two predictors are .01921 and .01852 respectively. The difference between these depends
upon Hs relative to L0 VL. When Hs = L0 VL, the variance is .01852.
As a matter of interest suppose that t is known and we predict using y X. Then
the BLUP predictor is
(.02703 .06667 .11111.08108.08108.08108)(y X)
with variance = .09980. Note that the variance is larger than when X is unknown. This is
a consequence of the result that in both BLUP and in selection index the more information
available the smaller is prediction error variance and the larger is the variance of the
) is equal to V ar(m0 u)
predictor. In fact, with perfect information the variance of (m0 u
) is 0. The minimum variance predictor is the same when
and the variance of (m0 u m0 u
t is known as when it is unknown. Now we verify that the predictors are unbiased in the
selection model described. By the Pearson result for multivariate normality,

1
E(ys ) =
18

12
6
6
6

6 12

2
1

2
1
1
1

d1
d2

17

1
1
1
0
0
0

0
0
0
1
1
1

t1
t2

and

1
E(us ) =
18

4
2
2
2

2 4

2
1

2
1
1
1

d1
d2

It is easy to verify that all of the predictors described have this same expectation. If
t = were known, a particularly simple unbiased predictor is
31 (1 1 1 1 1 1) (y X).
But the variance of this predictor is very much larger than the others. The variance is
1.7222 when Hs = L0 VL.

18

Chapter 14
Restricted Best Linear Prediction
C. R. Henderson
1984 - Guelph

Restricted Selection Index

Kempthorne and Nordskog (1959) derived restricted selection index. The model and
design assumed was that the record on the j th trait for the ith animal is
0

yij = xij + uij + eij .


Suppose there are n animals and t traits. It is assumed that every animal has observations
on all traits. Consequently there are nt records. Further assumptions follow. Let ui and
ei be the vectors of dimension t l pertaining to the ith animal. Then it was assumed
that
V ar(ui ) = G0 for all i = 1, . . . , n,
V ar(ei ) = R0 for all i = 1, . . . , n,
0
Cov(ui , uj ) = 0 for all i6=j,
0

Cov(ei , ej ) = 0 for all i6=j,


0

Cov(ui , ej ) = 0 for all i, j.


Further u, e are assumed to have a multivariate normal distribution and is assumed
known. This is the model for which truncation on selection index for m0 ui maximizes the
expectation of the mean of selected m0 ui , the selection index being the conditional mean
and thus meeting the criteria for Cochrans (1951) result given in Section 5.1.
Kempthorne and Nordskog were interested in maximizing improvement in m0 ui but
0
0
at the same time not altering the expected value of C0 ui in the selected individuals, C0
being of dimension sxt and having s linearly independent rows. They proved that such a
restricted selection index is
a0 y ,
where y = the deviations of y from their known means and a is the solution to
G0 + R0 G0 C
C 0 G0
0

G0 m
0

(1)

This is a nice result but it depends upon knowing and having unrelated animals and
the same information on each candidate for selection. An extension of this to related
animals, to unequal information, and to more general designs including progeny and sib
tests is presented in the next section.

Restricted BLUP
We now return to the general mixed model
y = X + Zu + e,

where is unknown, V ar(u) = G, V ar(e) = R and Cov(u, e0 ) = 0. We want to predict


k0 + m0 u by a0 y where a is chosen so that a0 y is invariant to , V ar(a0 y k0 m0 u) is
minimum, and the expected value of C0 u given a0 y = 0. This is accomplished by solving
.
mixed model equations modified as in (2) and taking as the prediction k0 o + m0 u
X0 R1 X
X0 R1 Z
X0 R1 ZGC
0 1

0 1
1
ZR Z+G
Z0 R1 ZGC
ZR X

0 1
0 1
0 1
0
0
0
C GZ R X C GZ R Z
C GZ R ZGC

X0 R1 y
o
0 1

= ZR y
.
u
0 1
0
C GZ R y

(2)

= 0. Premultiply the second equation by C0 G and subtract


It is easy to prove that C0 u
= 0.
from this the third equation. This gives C0 u

Application

Quaas and Henderson (1977) presented computing algorithms for restricted BLUP
in an additively genetic model and with observations on a set of correlated animals. The
algorithms permit missing data on some or all observations of animals to be evaluated.
Two different algorithms are presented, namely records ordered traits within animals and
records ordered animals within traits. They found that in this model absorption of
results in a set of equations with rank less than r + q, the rank of regular mixed model
equations, where r = rank (X) and q = number of elements in u. The linear dependencies
is unique, but care needs to
relate to the coefficients of but not of u. Consequently u
o
0
0
be exercised in solving for and in writing K + m u, for K0 must now be estimable
under the augmented mixed model equations.

Chapter 15
Sampling from finite populations
C. R. Henderson
1984 - Guelph

Finite e

The populations from which samples have been drawn have been regarded as infinite
in preceding chapters. Thus if a random sample of n is drawn from such a population
with variance 2 , the variance-covariance matrix of the sample vector is In 2 . Suppose in
contrast, the population has only t elements and a random sample of n is drawn. Then
the variance-covariance matrix of the sample is

1/(t 1)

1
...
1/(t 1)

2.

(1)

If t = n, that is, the sample is the entire population, the variance-covariance matrix is
singular. As an example, suppose that the population of observations on a fixed animal
is a single observation on each day of the week. Then the model is
yi = + e i .

V ar(ei ) =

(2)

1
1/6 1/6
1/6
1
1/6
2
..
..
..
.
.
.
.
1/6 1/6
1

Suppose we take n random observations. Then BLUE of is

= y,
and
V ar(
) =

7n 2
,
6n

which equals 0 if n = 7. In general, with a population size, t, and a sample of n,


tn
2,
n(t 1)

V ar (
) =
1

(3)

which goes to 2 /n when t goes to infinity, the latter being the usual result for a sample
of n from an infinite population with V ar = I 2 .
Suppose now that in this same problem we have a random sample of 3 unrelated
animals with 2 observations on each and wish to estimate and to predict a when the
model is
yij = + ai + eij ,
V ar(a) = I3 ,

V ar(e) =

1
1/6
0
0
0
0
1/6
1
0
0
0
0
0
0
1
1/6
0
0
0
0
1/6
1
0
0
0
0
0
0
1
1/6
0
0
0
0
1/6
1

Then

R1 =

6
1
0
0
0
0

1
6
0
0
0
0

0
0
0
0
1
6

0
0
6
1
0
0

0
0
1
6
0
0

0
0
0
0
6
1

a1
a2
a3

= .2

/35.

The BLUP equations are

1.2 .4 .4 .4
.4 1.4 0
0
.4 0 1.4 0
.4 0
0 1.4

y..
y1 .
y2 .
y3 .

Finite u

We could also have a finite number of breeding values from which a sample is drawn.
If these are unrelated and are drawn at random from a population with t animals

1/t

1
..

V ar(a) =

1/t

a2 .

(4)

If q are chosen not at random, we can either regard the resulting elements of a as fixed
or we may choose to say we have a sample representing the entire population. Then

V ar(a) =

1/q

1
...
1/q

1
2

2
a
,

(5)

2
probably is smaller than a2 . Now G is singular, and we need to compute BLUP
where a
by the methods of Section 5.10. We would obtain exactly the same results if we assume
a fixed but with levels that are unpatterned, and we then proceed to biased estimation
as in Chapter 9, regarding the average values of squares and products of elements of a as

...

P =

1/q

1
1/q

2
a
.

(6)

Infinite By Finite Interactions

Much controversy has surrounded the problem of an appropriate model for the interactions in a 2 way mixed model. One commonly assumed model is that the interactions
have V ar = I 2 . An alternative model is that the interactions in a row (rows being
random and columns fixed) sum to zero. Then variance of interactions, ordered columns
in rows, is

B 0 0
0 B 0
2
.. ..
..

. .
.
0 0
B

(7)

where B is c c with 1s on the diagonal and 1/(c 1) on all off- diagonals, where c =
number of columns. We will show in Chapter 17 how with appropriate adjustment of r2
(= variance of rows) we can make them equivalent models. See Section 1.5 for definition
of equivalence of models.

Finite By Finite Interactions

Suppose that we have a finite population of r rows and c columns. Then we might assume that the variance-covariance matrix of interactions is the following matrix multiplied
by 2 .
All diagonals = 1.
Covariance between interactions in the same row = 2 /(c 1).
Covariance between interactions in the same column = 2 /(r 1).
Covariance between interations in neither the same row nor column =
2 /(r 1)(c 1).
3

If the sample involves r rows and c columns both regarded as fixed, and there is no assumed
pattern of values of interactions, estimation biased by interactions can be accomplished by
regarding these as pseudo-random variables and using the above variances for elements
of P, the average value of squares and products of interactions. This methodology was
described in Chapter 9.

Finite, Factorial, Mixed Models

In previous chapters dealing with infinite populations from which u is drawn at random as well as infinite subpopulations from which subvectors ui are drawn the assumption
has been that the expectations of these vectors is null. In the case of a population with
finite levels we shall assume that the sum of all elements of their population = 0. This results in a variance- covariance matrix with rank t1, where t = the number of elements
in the population. This is because every row (and column) of the variance-covariance matrix sums to 0. If the members of a finite population are mutually unrelated (for example,
a set of unrelated sires), the variance-covariance matrix usually has d for diagonal elements and d/(t 1) for all off-diagonal elements. If the population refers to additive
genetic values of a finite set of related animals, the variance-covariance matrix would be
Aa2 , but with every row (and column) of A summing to 0 and a2 having some value
different from the infinite model value.
With respect to a factorial design with 2 factors with random and finite levels the
following relationship exists. Let ij represent the interaction variables. Then
q1
X

ij = 0 for all j = 1, . . . , q2 ,

i=1

and

q2
X

ij = 0 for all i = 1, . . . , q1 ,

(8)

j=1

where q1 and q2 are the numbers of levels of the first and second factors in the two
populations.
Similarly for 3 factor interactions, ijk ,
q3
X
k=1
q2
X
j=1
q1
X

ijk = 0 for all i = 1, . . . , q1 , j = 1, . . . , q2 ,


ijk = 0 for all i = 1, . . . , q1 , k = 1, . . . , q3 ,
ijk = 0 for all j = 1, . . . , q2 , k = 1, . . . , q3 .

i=1

and
(9)

This concept can be extended to any number of factors. The same principles regarding
interactions can be applied to nesting factors if we visualize nesting as being a factorial
design with planned disconnectedness. For example, let the first factor be sires and the
second dams with 2 sires and 5 dams in the experiment. In terms of a factorial design
the subclass numbers (numbers per litter, eg.) are

Sires
1
2

1
5
0

Dams
3
8
0

2
9
0

4
0
7

5
0
10

If this were a variance component estimation problem, we could estimate s2 and e2


2
2
2
and this would usually be called d/s
.
. We can estimate d2 + sd
but not d2 and sd

Covariance Matrices
Consider the model
y = X +

Zi ui + possible interactions + e.

(10)

The ui represent main effects. The ith factor has ti levels in the population. Under the traditional mixed model for variance components all ti infinity. In that case V ar(ui ) = Ii2
for all i, and all interactions have variance-covariance that are I times a scalar. Further,
all subvectors of ui and those subvectors for interactions are mutually uncorrelated.
Now with possible finite ti

1/(ti 1)

1
...

V ar(ui ) =

1/(ti 1)

i2 .

(11)

This notation denotes ones for diagonals and all off-diagonal elements = 1/(ti 1).
Now denote by gh the interactions between levels of ug and uh . Then there are tg th
interactions in the population and the variance-covariance matrix has the following form,
where i denotes the level of the g th factor and j the level of the hth factor. The diagonals
are V ar(gh ).
All elements ij with ij 0 = V ar(gh )/(th 1).
All elements ij with i0 j = V ar(gh )/(tg 1).
All elements ij with i0 j 0 = V ar(gh )/(tg 1)(th 1).
5

(12)

i0 denotes not equal to i, etc.


To illustrate suppose we have two levels of a first factor and 3 levels of a second. The
variance-covariance matrix of

11
12
13
21
22
23

1 1/2 1/2 1 1/2


1/2

1
1/2 1/2 1
1/2

1
1/2 1/2
1
2
=
gh

1
1/2
1/2

1
1/2
1

Suppose that tg infinity. Then the four types of elements of the variance-covariance
matrix would be
[1, 1/(th 1), 0, 0] V ar(gh ).
This is a model sometimes used for interactions in the two way mixed model with levels
of columns fixed.
Now consider 3 factor interactions, f gh . Denote by i, j, k the levels of uf , ug , and
uh , respectively. The elements of the variance-covariance matrix except for the scalar,
V ar(f gh ) are as follows.
all diagonals
ijk with ijk 0
ijk with ij 0 k
ijk with i0 jk
ijk with ij 0 k 0
ijk with i0 jk 0
ijk with i0 j 0 k
ijk with i0 j 0 k 0

=
=
=
=
=
=
=
=

1.
1/(th 1).
1/(tg 1).
1/(tf 1).
1/(tg 1)(th 1)
1/(tf 1)(th 1)
1/(tf 1)(tg 1)
1/(tf 1)(tg 1)(th 1)

(13)

To illustrate, a mixed model with ug , uh fixed and tf infinity, the above become 1,
1/(th 1), 1/(tg 1), 0, k 1/(tg 1)(th 1), 0, 0, 0. If levels of all factors infinity,
the variance-covariance matrix is IV ar(f gh ).
Finally let us look at 4 factor interactions ef gh with levels of ue , uf , ug , uh denoted
by i, j, k, m, respectively. Except for the scalar V ar(ef gh ) the variance-covariance matrix
has elements like the following.
all diagonals = 1.
ijkm with ijkm0 = 1/(th 1), and
ijkm with ijk 0 m = 1/(tg 1), and
etc.
6

ijkm with ijk 0 m0


ijkm with ij 0 km0
etc.
0 0 0
ijkm with ij k m
ijkm with i0 jk 0 m0
etc.
0 0 0 0
ijk with i j k m

= 1/(tg 1)(th 1), and


= 1/(tf 1)(th 1)
= 1/(tf 1)(tg 1)(th 1) and
= 1/(te 1)(tg 1)(th 1)
= 1/(te 1)(tf 1)(tg 1)(th 1).

(14)

Note that for all interactions the numerator is 1, the denominator is the product of the
t 1 for subscripts differing, and the sign is plus if the number of differing subscripts is
even, and negative if the number of differing subscripts is odd. This set of rules applies
to any interactions among any number of factors.

Estimability and Predictability

Previous chapters have emphasized the importance of consideration of estimability


when X does not have full column rank, and this is usually the case in application. Now
if we apply the same rules given in Chapter 2 for checking estimability and find that an
element of , eg. , is estimable, the resulting estimate can be meaningless in sampling
from finite populations. To illustrate suppose we have a model,
yij = + si + eij .
Suppose that the si represent a random sample of 2 from a finite population of 5 correlated
sires. Now X is a column vector of 1s and consequently is estimable by our usual rules.
It seems obvious, however, that an estimate of has no meaning except as we define the
population to which it refers. If we estimate by GLS does
refer to the mean averaged
over the 2 sires in the sample or averaged over the 5 sires in the population? Looking
at the problem in this manner suggests that we have a problem in prediction. Then the
P
above question can be formulated as two alternatives, namely prediction of + 15 5t=1 si
P
versus prediction of + 21 2i=1 si , where the second alternative involves summing over
the 2 sires in the sample. Of course we could, if we choose, predict + k0 s, where k is any
vector with 5 elements and with k0 l = l. The variance of
, the GLS estimator or the
solution to in mixed model equations, is identical to the variance of error of prediction of
P
P
+.2 5i=1 si and not equal to the variance of error of prediction of +.5 2i=1 si . Let us
illustrate with some data. Suppose there are 20, 5 observations on sires 1, 2 respectively.
Suppose R = 50 I and

4 1 1 1

4 1 1

4 1
G =

1
1
1
1
4

Then the mixed model coefficient matrix (not including s3 ,s4 ,s5 ) is

15 12
1
20

30
with inverse

3
2

11

36 21 6
1
26
1

.
9
26

This gives the solution

6
3

1 =
2 2
s
9
2
2
s2

y1
y2

The variance of error of prediction of + .5(s1 + s2 ) is


(1 .5 .5) (Inverse matrix) (1 .5 .5)0 = 2.5.
This is not equal to 4, the variance of
from the upper left diagonal of the inverse.
Now let us set up equations for BLUP including all 5 sires. Since G is now singular
we need to use one of the methods of Section 5.10. The non-symmetric set of equations is

.5
1.5
0
.5
.5
.5

.4
2.6
.4
.4
.4
.4

.1
.1
1.4
.1
.1
.1

0
0
0
1.
0
0

0
0
0
0
1.
0

0
0
0
0
0
1.

.4
1.6
.4
.4
.4
.4

.1
.1
.4
.1
.1
.1

Post-multiplying the inverse of this coefficient matrix by


1 00
0 G

we get as the prediction error variance matrix the following


36 21 6
9
9
9

26
1 9 9 9

26
9
9
9

36 9 9

36 9
36

91

y1.
y2.

The upper 3 3 submatrix is the same as the inverse when only sires 1 and 2 are included.
The solution is

6
3

2 2
!
!

y1.
2

1 2

.
= 9
0
s
y2.
0

0
0
0
0
s3 , s4 , s5 = 0
as would be expected because these sires are unrelated to the 2 with progeny relative to
the population of 5 sires. The solution to
, s1 , s2 are the same as before. The prediction
P
error variance of + .2
si is
(1 .2 .2 .2 .2 .2) (Inverse matrix) (1 .2 .2 .2 .2 .2)0 = 4,
the value of the upper diagonal element of the inverse. By the same reasoning we find
P
that sj is BLUP of sj .2 5i=1 si and not of si .5 (s1 + s2 ) for i=1,2. Using the former
function with the inverse of the matrix of the second set of equations we obtain for s1
the value, 2.889. This is also the value of the corresponding diagonal. In contrast the
variance of the error of predition of s1 .5 (s1 + s2 ) is 1.389. Thus sj is the BLUP of
P
sj .2 5i=1 si .
The following rules insure that one does not attempt to predict K0 + M0 u that is
not predictable.
1. K0 must be estimable in a model in which E(y) = X.
2. Pretend that there are no missing classes or subclasses involving all levels of ui in the
population.
3. Then if K0 + M0 u is estimable in such a design with u regarded as fixed, K0 + M0 u
is predictable.
Use the rules of Chapter 2 in checking estimability.
For an example suppose we have sire treatment design with 3 treatments and 2
sires regarded as a random sample from an infinite population of possibly related sires.
Let the model be
yijk = + si + tj + ij + eijk .
, tj are fixed
V ar(s) = Is2 .
9

Var () when are ordered treatments in sires is

B 0 0
0 B 0
..
..
..
.
.
.
0 0 B

where

1
1/2 1/2
2
1
1/2
B =
1/2
.
1/2 1/2
1
Suppose we have progeny on all 6 sire treatment combinations except (2,3). This
creates no problem in prediction due to rule 1 above. Now we can predict for example
t1 +

ci (si + i1 ) t2

i=1

di (si + i2 )

i=1

where
X
i

ci =

di = 1.

That is, we can predict the difference between treatments 1 and 2 averaged over any sires
in the population, including some not in the sample of 2 sires if we choose to do so. In
fact, as we shall see, BLUE of (t1 t2 ) is BLUP of treatment 1 averaged equally over all
sires in the population minus treatment 2 averaged equally over all sires in the population.
Suppose we want to predict the merit of sire 1 versus sire 2. By the rules above,
(s1 s2 ) is not predictable, but
s1 +

3
X

cj (tj + ij ) s2

j=1

3
X

dj (tj + ij )

j=1

is predictable if j cj = j dj = 1. That is, we can predict sire differences only if we


specify treatments, and obviously only treatments 1, 2, 3. We cannot predict unbiasedly,
from the data, sire differences associated with some other treatment or treatments. But
note that even though subclass (2,3) is missing we can still predict s1 +t3 +13 s2 t3 23 .
In contrast, if sires as well as treatments were fixed, this function could not be estimated
unbiasedly.

BLUP When Some ui Are Finite

Calculation of BLUE and BLUP when there are finite levels of random factors must
take into account the fact that there may be singular G. Consider the simple one way
10

case with a population of 4 related sires. Suppose


1. .2 .3 .5

1. .2 .6

.
A =

1
.5
1.6

Suppose we have progeny numbers on these sires that are 9, 5, 3, 0. Suppose the model
is
yijk = + si + eij .
V ar(s) = As2 .
V ar(e) = Ie2 .
Then if we wish to include all 4 sires in the mixed model equations we must resort to the
methods of Sect. 5.10 since G is singular. One of those methods is to solve

2
As

1
0

s1
.
.
s4

17
9
5
3
0

9
9
0
0
0

5
0
5
0
0

3
0
0
3
0

0
0
0
0
0

2
e +

1 00
0 As2

y..
y1.
y2.
y3.
0

0
0
0
0
0

0
1
0
0
0

0
0
1
0
0

0
0
0
1
0

0
0
0
0
1

/e2 .

is BLUP of +

1 X
si .
4 i

sj is BLUP of sj

1 X
si .
4 i

(15)

The inverse of the coefficient matrix post-multiplied by


1 00
0 As2

is the variance-covariance matrix of errors of predictions of these functions.


If we had chosen to include only the 3 sires with progeny, the mixed model equations
would be

17
9
5
3

9
9
0
0

5
0
5
0

3
0
0
3

e2

0
0
0
0

.2
.3
11

0
.2
1
.2

0
1

.3.
.2

2
s

s1
s2
s3

y..
y1.
y2.
y3.

e2 .

(16)

This gives the same solution to


, s1 , s2 , s3 as the solution to (15), and the inverse of the
coefficient matrix gives the same prediction variances. Even though s4 is not included,

P
P
predicts + 41 4i=1 si , and sj predicts sj 14 4i=1 si . s4 can be computed by
1

1 .2 .3

[.5 .6 .5] .2 1 .2
.3 .2 1

s1

2
s
.
s3

As another example suppose we have a sire by treatment model with an infinite


population of sires. The nij are
1
0
9
6

1
2
3

2
8
2
0

Var (s) = 2I, Var (e) = l0 I,


Var () including missing subclasses is
1 1 0
0

1
0
0

1 1

0
0
0
0
0
0
0
0
1 1
1

/2.

If we do not include 11 and 32 in the solution the only submatrix of G that is singular
is the 2x2 block pertaining to 21 , 22 . The GLS equations regarding u as fixed are

1
10

0 0
11 0
6

0
9
6
15

8
2
0
0
10

8
0
0
0
8
8

0
9
0
9
0
0
9

0
2
0
0
2
0
0
2

0
0
6
6
0
0
0
0
6

12

s1

2
s

3
t
1

t2


12


21

22

31

y1..
y2..
y3..
y.1.
y.2.
y12.
y21.
y22.
y31.

1
10

(17)

Then we premultiply the 7th and 8th equations of (17) by


!

1 1
1
1

/2

and add to the diagonal coefficients, (.5, .5, .5, 0, 0, 2, 1, 1, 2). The solution to the
resulting equations is BLUP. If we had included 12 and 32 , we would premultiply the
last 6 GLS equations (equations for ) by Var () and then add to the diagonals, (.5,
.5, .5, 0, 0, 1, 1, 1, 1, 1, 1). When all elements of a population are included in a BLUP
solution, an interesting property becomes apparent. The same summing to 0s occurs
in the BLUP solution as is true in the corresponding elements of the finite populations
described in Section 4.

An Easier Computational Method

Finite populations complicate computation of BLUE and BLUP because non-diagonal


and singular G matrices exist. But if the model is that of Section 2, that is, finite
populations of unrelated elements with common variance, computations can be carried
do not always predict the same
out with diagonal submatrices for G. The resulting u
functions predicted by using the actual G matrices, but appropriate linear functions of
them do. We illustrate with a simple one way case.
yij = + ai + eij .

i = 1, 2, 3.

2 1 1

2 1
V ar(a) = 1
,
1 1
2
V ar(e) = 10I.
ni = (5, 3, 2), yi. = (10, 8, 6).
Using singular G the nonsymmetric mixed model equations are

1.
.5
.1
.4

.5
.3
.2
2.
.3 .2
.5 1.6 .2
.5 .3 1.4

a
1
a
2
a
3

The solution is [2.4768, -.2861, .0899, .1962]. Note that


X

ui = 0.

13

2.4
.6
0
.6

(18)

We can obtain the same solution by pretending that V ar(a) = 3I. Then the mixed
model equations are

1
.5
.3
.2
.5 .5 + 31
0
0
1
.3
0
.3 + 3
0
.2
0
0
.2 + 31

a
1
a
2
a
3

2.4
1.0
.8
.6

(19)

The inverse of (15.19) is different from the inverse of (15.18) post-multiplied by


1 00
0 G

The inverse of (19) does not yield prediction error variances. To obtain prediction error
variances of + a
. and of ai a
. pre-multiply it by

1
3

3
1
1
1
0
2 1 1

0 1
2 1
0 1 1
2

and post-multiply that product by the transpose of this matrix. This is a consequence of
the fact that the solution to (19) is BLUP of

1
3

3
1
1
1

0
2 1 1 s1

0 1
2 1 s2
0 1 1
2
s3

In most cases use of diagonal G does not result in the same solution as using the true G,
and the inverse never yields directly the prediction error variance-covariance matrix.
Rules for deriving diagonal submatrices of G to use in place of singular submatrices
follow. For main effects say of ui with ti levels substitute for the G submatrix described
2
in Section 6, Ii
, where
2
i
=

ti
ti1

i2

X
j

X
j,k,m

X
ti
ti
2
ij2 +
ijk
(ti1 )(tj1 )
j,k (ti1 )(tj1 )(tk1 )

ti
2
etc.
(ti1 )(tj1 )(tk1 )(tm1 ) ijkm

for 5 factor, 6 factor interactions.

(20)

i2 refers to the scalar part of the variance of the ith factor, ij2 refers to 2 factor interactions
2
involving ui , ijk
refers to 3 factor interactions involving ui , etc. Note that the signs
alternate
X
ti tj
ti tj
2
2
ij
=
ij2
ijk
(ti1 )(tj1 )
(ti1 )(tj1 )(tk1 )
k
14

X
k,m

2
ijk
=

ti tj
2
etc.
(ti1 )(tj1 )(tk1 )(tm1 ) ijkm

(21)

X
ti tj tk
ti tj tk
2
ijk

(ti1 )(tj1 )(tk1 )


m (ti1 )(tj1 )(tk1 )(tm1 )
2
ijk
+ etc.

(22)

Higher order interactions for 2 follow this same pattern with alternating signs. The
sign is positive when the number of factors in the denominator minus the number in the
numerator is even.
2
It appears superficially that one needs to estimate the different i2 , ij2 , ijk
, etc., and
this is difficult because non-diagonal, singular submatrices of G are involved. But if one
plans to use their diagonal representations, one might as well estimate the 2 directly by
any of the standard procedures for the conventional mixed model for variance components
estimation. Then if for pedagogical or other reasons one wishes estimates of 2 rather
than 2 , one can use equations (20), (21), (22) that relate the two to affect the required
linear transformation.

The solution using diagonal G should not be assumed to be the same as would have
been obtained from use of the true G matrix. If we consider predictable functions as
defined in Section 7 and take these same functions of the solution using diagonal G we
do obtain BLUP. Similarly using these functions we can derive prediction error variances
using a g-inverse of the coefficient matrix with diagonal G.

10

Biased Estimation

If we can legitimately assume that there is no expected pattern of values of the levels
of a fixed factor and no expected pattern of values of interactions between levels of fixed
factors, we can pretend that these fixed factors and interactions are populations with finite
levels and proceed to compute biased estimators as though we are computing BLUP of
random variables. Instead of prediction error variance as derived from the g-inverse of
the coefficient matrix we obtain estimated mean squared errors.

15

Chapter 16
The One-Way Classification
C. R. Henderson
1984 - Guelph

This and subsequent chapters will illustrate principles of Chapter 1-15 as applied
to specific designs and classification of data. This chapter is concerned with a model,
yij = + ai + eij .

(1)

Thus data can be classified with ni observations on the ith class and with the total of
observations in that class = yi. . Now (1) is not really a model until we specify what population or populations were sampled and what are the properties of these populations. One
possibility is that in conceptual repeated sampling and ai always have the same values,
and the eij are random samples from an infinite population of uncorrelated variables with
mean 0, and common variance, e2 . That is, the variance of the population of e is Ie2 ,
and the sample vector of n elements has expectation null and variance = Ie2 . Note that
Var(eij ) is assumed equal to Var(ei0 j ), i 6= i0 .

Estimation and Tests For Fixed a

Estimation and tests of hypothesis are simple under this model. The mixed model
equations are OLS equations since Zu does not exist and since V ar(e) = Ie2 . They are

1
e2

n.
n1
n2
..
.

n1
n1
0
..
.

n2 . . .
o
o
0 . . . a1
o

n2 . . .
a2
..
..
.
.

y..
y1.
y2.
..
.

1
.
e2

(2)

The X matrix has t + 1 columns, where t = the number of levels of a, but the rank is t.
None of the elements of the model is estimable. We can estimate
+

t
X

ki ai ,

i=1

where
X

ki = 1,

or

t
X

k i ai ,

if
X

ki = 0.

For example + ai is estimable, ai ai0 is estimable, and


a1

t
X

ki ai ,

i=2

with

t
X

ki = 1,

i=2

is estimable. The simplest solution to (2)


to the following g-inverse.

0 0

1
0 n1

0 0

.. ..
. .

is o = 0, aoi = y i. . This solution corresponds

0
...
0
...

n1
.
.
.
2

..
.

Let us illustrate with the following example


(n1 , n2 , n3 ) = (8, 3, 4),
(y1. , y2. , y3. ) = (49, 16, 13),
y0 y = 468.
The OLS equations are
15 8 3 4
1
8 0 0

3 0
e
4

o
ao1
ao2
ao3

78
49
16
13

1
.
e2

(3)

A solution is (0, 49/8, 16/3, 13/4). The corresponding g-inverse of the coefficient matrix
is

0 0
0
0

81 0
0

31 0 e
41
Suppose one wishes to estimate a1 a2 , a1 a3 , a2 a3 . Then from the above solution
these would be 49
16
, 49
13
, 16
13
. The variance-covariance matrix of these estimators
8
3
8
4
3
4

is

0 1 1
0

0 1
0 1

0 0
1 1

0
0
0
0

0
81
0
0

0
0
31
0

0
0
0
41

0
0
0
1
1
0
1
0
1
0 1 1

e2 .

(4)

We do not know e2 but it can be estimated easily by

e2 = (y0 y

yi.2 /ni )/(15 3)

= (468 427.708)/12
= 3.36.
Then we can substitute this for e2 to obtain estimated sampling variances.
Suppose we want to test the hypothesis that the levels of ai are equal. This can be
expressed as a test that

0 1 1
0
0 1
0 1

a1
a2
a3

V ar(K0 o ) = K0 (g inverse)K =
with
inverse =

2.4 .8
.8 2.9333

0
0

.45833 .125
.125 .375
!

e2

1
.
e2

K0 o = (.79167 2.875)0 .
Then
numerator SS = (.79167 2.875)

2.4 .8
2.9333

.79167
2.875

= 22.108.
The same numerator can be computed from
X yi.2
i

ni

y..2
= 427.708 405.6 = 22.108.
n.

Then the test that ai are equal is


hypothesis.

22.108/2
3.36

which is distributed as F2,12 under the null

Levels of a Equally Spaced

In some experiments the levels of a (treatments) are chosen to be equally spaced.


For example, if treatments are percent protein in the diet, the levels chosen might be 10%,
12%, 14%, 16%, 18%. Suppose we have 5 such treatments with ni = (5,2,1,3,8) and yi. =
(10,7,3,8,33). Let the full model be
yij = + 1 xi + 2 x2i + 3 x3i + 4 x4i + eij

(5)

where xi = (1,2,3,4,5). With Var(e) = I 2 the OLS equations under the full model are

19

64 270
270 1240
5886

1240
5886
28, 384
138, 150

5886
28, 384
138, 150
676, 600
3, 328, 686

1
2
3
4

61
230
1018
4784
23, 038

(6)

The solution is [-4.20833, 9.60069, -3.95660, .58681, -.02257]. The reduction in SS is


P
210.958 which is exactly the same as i yi.2 /ni . A common set of tests is the following.
1 = 0 assuming 2 , 3 , 4 non-existent.
2 = 0 assuming 3 , 4 non-existent.
3 = 0 assuming 4 non-existent.
4 = 0.
This can be done by computing the following reductions.
1. Red (full model).
2. Red (, 1 , 2 , 3 ).
3. Red (, 1 , 2 ).
4. Red (, 1 ).
5. Red ().
Then the numerators for tests above are reductions 4-5, 3-4, 2-3, 1-2 respectively.
Red (2) is obtained by dropping the last equation of (6). This gives the solution
(-3.2507, 7.7707, -2.8325, .3147) with reduction = 210.952. The other reductions by
successive dropping of an equation are 207.011, 206.896, 195.842. This leads to mean

squares each with 1 df.


Linear
11.054
Quadratic
.115
Cubic
3.941
Quartic
.006
The sum of these is equal to the reduction under the full model minus the reduction due
to alone.

Biased Estimation of + ai

Now we consider biased estimation under the assumption that values of a are unpatterned. Using the same data as in the previous section we assume for purposes of
illustration that V ar(e) = 65 I, and that the average values of squares and products of the
deviations of a from a are

4 1 1 1

4 1 1
1

4 1
8

1
1
1
1
4

(7)

Then the equations for minimum mean squared error estimation are

22.8
.9
1.35
2.1
.6
3.15

6.0
4.0
.75
.75
.75
.75

2.4
.3
2.2
.3
.3
.3

1.2
.15
.15
1.6
.15
.15

3.6
.45
.45
.45
2.8
.45

9.6
1.2
1.2
1.2
1.2
5.8

o
( )

The solution is (3.072, -.847, .257, -.031, -.281, .902). Note that
X

a
i = 0.

The estimates of differences between ai are


2
1 -1.103
2
3
4

3
-.816
.288

4
-.566
.538
.250

5
-1.749
-.645
-.933
-1.183

73.2
1.65
3.9
6.9
3.15
15.6

(8)

Contrast these with the corresponding BLUE. These are


2
1 -1.5
2
3
4

3
-1.0
.5

4
5
-.667 -2.125
.833 -.625
.333 -1.125
-1.458

Generally the absolute differences are larger for BLUE.


The mean squared error of these differences, assuming that e2 and products of deviations of a are correct, are obtained from a g-inverse post-multiplied by

1
0
0
0
0
0

0
4
1
1
1
1

0
1
4
1
1
1

0
1
1
4
1
1

0
1
1
1
4
1

0
1
1
1
1
4

These are
2
3
4
5
1 .388 .513 .326 .222
2
.613 .444 .352
3
.562 .480
4
.287
The corresponding values for BLUE are
2
3
4
1 .7 1.2 .533
2
1.5 .833
3
1.333
4

5
.325
.625
1.125
.458

If the priors used are really correct, the MSE for biased estimators of differences are
considerably smaller than BLUE.
The same biased estimators can be obtained by use of a diagonal P, namely .625I,
where
5
.625 =
(.5).
51
This gives the same solution vector, but the inverse elements are different. However, mean
squared errors of estimable functions such as the a
i a
j and
+ a yield the same results
when applied to the inverse.
6

Model with Linear Trend of Fixed Levels of a


Assume now the same data as section 2 and that the model is
yij = + xi + ai + eij

(9)

where xi = (1,2,3,4,5). Suppose that the levels of a are assumed to have no pattern and
we use a prior value on their squares and products =

Assume as before Var(e) =

5
6

.05

.2
...
.05

I. Then the equations to solve are

22.8
76.8
6.
2.4
1.2
3.6
76.8
324
6
4.8
3.6 14.4
.36 2.34 2.2 .12 .06 .18
.54 2.64 .3 1.48 .06 .18
.84 2.94 .3 .12 1.24 .18
.24 .24 .3 .12 .06 1.72
1.26
8.16 .3 .12 .06 .18

9.6
48
.48
.48
.48
.48
2.92

a
1
a
2
a
3
a
4
a
5

73.2
276
.66
1.56
2.76
1.26
6.24

(10)

The solution is [1.841, .400, -.145, .322, -.010, -.367, .200]. Note that
X

a
i = 0.

We need to observe precautions in interpreting the solution. is not estimable and neither
is + ai nor ai ai0 .
We can only estimate treatment means associated with the particular level of xi in the
experiment. Thus we can estimate + ai + xi where xi = 1,2,3,4,5 for the 5 treatments
respectively. The biased estimates of treatment means are
1. 1.841 + .400 - .145 = 2.096
2. 1.841 + .800 + .322 = 2.963
3. 1.841 + 1.200 - .010 = 3.031
4. 1.841 + 1.600 - .367 = 3.074
5. 1.841 + 2.000 + .200 = 4.041
The corresponding BLUE are the treatment means, (2.0, 3.5, 3.0, 2.667, 4.125).
If the true ratio of squares and products of ai to e2 are as assumed above, the biased
for the biased
estimators have minimum mean squared error. Note that E(
+a
i + xi )
estimator is + xi + some function of a (not equal to ai ). The BLUE estimator has, of
course, expectation, + xi + ai , that is, it is unbiased.
7

The Usual One Way Covariate Model

If, in contrast to xi being constant for every observation on the ith treatment as in
Section 4, we have the more traditional covariate model,
yij = + xij + ai + eij ,

(11)

we can then estimate + ai unbiasedly as well as ai ai0 . Again, however, if we think the
ai are unpatterned and we have some good prior value of their products, we can obtain
smaller mean squared errors by using the biased method.
Now we need to consider the meaning of an estimator of + ai . This really is an
estimator of treatment mean in hypothetical repeated sampling in which xi. = 0. What if
the range of the xij is 5 to 21 in the sample? Can we infer from this that the the response
to levels of x is that same linear function for a range of xij as low as 0? Strictly speaking
we can draw inferences only for the values of x in the experiment. With this in mind
we should really estimate + ai + k, where k is some value in the range of xs in the
experiment. With regard to treatment differences, ai ai0 , can be regarded as an estimate
of ( + ai + k) ( + ai0 + k), where k is in the range of the xs of the experiment.

Nonhomogenous Regressions
A still different covariate model is
yij = + i xij + ai + eij .

Note that in this model is different from treatment to treatment. According to the rules
for estimability + ai , ai ai0 , and i are all estimable. However, it is now obvious that
ai ai0 has no practical meaning as an estimate of treatment difference. We must specify
what levels of x we assume to be present for each treatment. In terms of a treatment
mean these are
+ ai + ki i
and
+ aj + kj j
and the difference is
ai + ki i aj kj j .
Suppose ki = kj = k. Then the treatment difference is
ai aj + k(i j ),
and this is not invariant to the choice of k when i 6= j . In contrast when all i = , the
treatment difference is invariant to the choice of k.
Let us illustrate with two treatments.
8

Treatment
1
2

ni
8
5

yi. xi.
38 36
43 25

0
25
0
135

x2ij
220
135

xij yij
219
208

This gives least squares equations


8 0

36
0
220

+ t1

+ t2
1
2

38
43
219
208

The solution is (1.0259, 12.1, .8276, -.7). Then the estimated difference, treatment 1
minus treatment 2 for various values for x, the same for each treatment, are as follows
x Estimated Difference
0
-11.07
2
-8.02
4
-4.96
6
-1.91
8
1.15
10
4.20
12
7.26
It is obvious from this example that treatment differences are very sensitive to the average
value of x.

The Usual One Way Random Model


Next we consider a model
y
V ar(a)
V ar(e)
Cov(a, e0 )

=
=
=
=

+ ai + eij .
Ia2 ,
Ie2 ,
0.

In this case it is assumed that the levels of a in the sample are a random sample from an
infinite population with var Ia2 , and similarly for the sample of e. The experiment may
have been conducted to do one of several things, estimate , predict a, or to estimate a2
and e2 . We illustrate these with the following data.

Levels of a ni
1
5
2
2
3
1
4
3
5
8

yi.
10
7
3
8
33

Let us estimate and predict a under the assumption that e2 /a2 = 10. Then we
need to solve these equations.

19

5
15

2
0
12

1
0
0
11

3
0
0
0
13

8
0
0
0
0
18

a
1
a
2
a
3
a
4
a
5

61
10
7
3
8
33

(12)

The solution is [3.137, -.379, .061, -.012, -.108, .439]. Note that a
i = 0. This could have
been anticipated by noting that the sum of the last 4 equations minus the first equation
gives
X
10
a
i = 0.
The inverse of the coefficient matrix is
.0790 .0263 .0132 .0072 .0182 .0351

.0754
.0044
.0024
.0061
.0117

.0855
.0012
.0030
.0059

.0916
.0017
.0032

.0811
.0081
.0712

(13)

This matrix premultiplied by (0 1 1 1 1 1) equals (-1 1 1 1 1 1)(a2 /e2 ). This is always a


check on the inverse of the coefficient matrix in a model of this kind. From the inverse
V ar(
) = .0790 e2 ,
V ar(
a1 a1 ) = .0754 e2 .

is BLUP of + the mean of all a in the infinite population. Similarly a


i is BLUP of ai
minus the mean of all ai in the infinite population.
Let us estimate a2 by Method 1. For this we need i yi.2 /ni and y..2 /n. and their expectations. These are 210.9583 and 195.8421 with expectations, 19a2 +5e2 and 5.4211a2 +e2
respectively ignoring 19 2 in both.
P

e2 = (y0 y 210.9583)/(19 5).


10

Suppose this is 2.8. Then


a2 = .288.
Let us next compute an approximate MIVQUE estimate using the prior e2 /a2 = 10,
the ratio used in the BLUP solution. We shall use
e2 = 2.8 from the least squares
0 a
= .35209 and
residual rather than a MIVQUE estimate. Then we need to compute a
its expectation. The expectation is trV ar(
a). But V ar(
a) = Ca V ar(r)C0a , where Ca is
the last 5 rows of the inverse of the mixed model equations (12), and r is the vector of
right hand sides.

V ar(r) =

5
5
0
0
0
0

2
0
2
0
0
0

1
0
0
1
0
0

3
0
0
0
3
0

8
0
0
0
0
8

19 5 2 1

5 0 0

2 0

3
0
0
0
3

8
0
0
0
0
8

5
5
0
0
0
0

2
0
2
0
0
0

1
0
0
1
0
0

3
0
0
0
3
0

8
0
0
0
0
8

a2 +

e2 .

This gives
) = .27163 a2 + .06802 e2 ,
E(
a0 a
a2 = .595.
and using
e2 = 2.8, we obtain

Finite Levels of a

Suppose now that the five ai in the sample of our example of Section 7 comprise all
of the elements of the population and that they are unrelated. Then

..

V ar(a) =

.25

1
.25

.
1

a2 .

Let us assume that e2 /a2 = 12.5. Then the mixed model equations are the OLS equations
premultiplied by

1
0
0
0
0
0

.08 .02 .02 .02 .02

.08
.02
.02
.02

.
(14)

.08 .02 .02

.08 .02
.08
11

This gives the same solution as that to (11). This is because a2 of the infinite model is
times a2 of the finite model. See Section 15.9. Now
is a predictor of
+

1X
ai
5 i

aj

1X
ai .
5 i

5
4

and a
j is a predictor of

Let us find the Method 1 estimate of a2 in the finite model. Again we compute i yi.2 /ni
and y..2 /n. . Then the coefficient of e2 in each of these is the same as in the infinite model,
that is 5 and 1 respectively. For the coefficients of a2 we need the contribution of a2 to
V ar(rhs). This is
P

5
5
0
0
0
0

2
0
2
0
0
0

38.5

1
0
0
1
0
0

3
0
0
0
3
0

8
0
0
0
0
8

14

1
..

14

(left matrix)0

7.5 4.5 3.5 3.0


42.
25.0 2.5 1.25 3.75 10.

4.0
.5 1.5 4.
.
1. .75 2.

9. 6.
64.

(15)

Then the coefficient of a2 in i yi.2 /ni is tr[dg(5, 2, 1, 3, 8)]1 times the lower 55 submatrix
of (15) = 19.0. The coefficient of a2 in y..2 /n. = 38.5/19 = 2.0263. Thus we need only
the diagonals of (15). Assuming again that e2 = 2.8, we find
a2 = .231. Note that in
the infinite model
a2 = .288 and that 45 (.231) = .288 except for rounding error. This
demonstrates that we could estimate a2 as though we had an infinite model and estimate
and predict a using
a2 /
e2 in mixed model equations for the infinite model. Remember
that the resulting inverse does not yield directly V ar(
) and V ar(
a a). For this preand post-multiply the inverse by
P

1
5

5
0
0
0
0
0

1
4
1
1
1
1

1
1
4
1
1
1

1
1
1
4
1
1

1
1
1
1
4
1

1
1
1
1
1
4

This is in accord with the idea that in the finite model


is BLUP of + a. and a
i is
BLUP of ai a. .
12

One Way Random and Related Sires

We illustrate the use of the numerator relationship matrix in evaluating sires in a


simple one way model,
yij
V ar(s)
V ar(e)
Cov(s, e0 )
e2 /s2

=
=
=
=
=

+ si + eij .
As2 ,
Ie2 ,
0,
10.

Then mixed model equations for estimation of and prediction of s are

n. n1. n2. . . .
0 0 0
...

2
1
2
n1. n1. 0 . . .
0
A


e /s
+
0
n2. 0

..
..
..
.
.
.

s1
s2
..
.

= (y.. y1. y2. . . . )0 .

(16)

We illustrate with the numerical example of section 7 but now with

A=

0 .5 .5 0

1. 0
0 .5

1. .25 0
.
1 0

The resulting mixed model equations are

19

5
65/3

2
1
3
8

0
20/3 20/3
0
s1

46/3
0
0
20/3 s2

s =
43/3
0
0
3

s4

49/3
0
64/3
s5

61
10
7
3
8
33

(17)

The solution is (3.163, -.410, .232, -.202, -.259, .433). Note that i si 6= 0 in contrast to
the case in which A = I. Unbiased estimators of e2 and s2 can be obtained by computing
Method 1 type quadratics, that is
P

y0 y

X
i

13

yi.2 /ni

and
X

yi.2 /ni C.F.

However, the expectations must take into account the fact that V ar(s) 6= Is2 , but rather
As2 . In a non-inbred population
E(y0 y) = n. (s2 + e2 ).
For an inbred population the expectation is
X

ni aii a2 + n. e2 ,

where aii is the ith diagonal element of A. The coefficients of e2 in yi.2 /ni and y..2 /n. are
the same as in an unrelated sample of sires. The coefficients of s2 require the diagonals
of V ar(rhs). For our example, these coefficients are
P

5
5
0
0
0
0

2
0
2
0
0
0

1
0
0
1
0
0

3
0
0
0
3
0

8
0
0
0
0
8

A (left matrix)0

140.5 35. 12. 4.25 17.25 72.

25.
0 2.5
7.5
0

4.
0
0 8.

.
=
1.
.75
0

9.
0
64.

(18)

Then the coefficient of a2 in i yi.2 /ni is tr(dg(0, 51 , 21 , 1, 31 , 81 )) times the matrix


in (18) = 19. The coefficient of a2 in y..2 /n. = 140.5/19 = 7.395.
P

If we wanted an approximate MIVQUE we could compute rather than


X
i

2
y1.
y2
..
ni
n.

of Method 1, the quadratic,


0 A1 u
= .3602.
u
The expectation of this is
tr(A1 V ar(s)).
V ar(s) = Cs V ar(rhs) C0s .
14

Cs is the last 5 rows of the inverse of the mixed model coefficient matrix.
V ar(rhs) = Matrix (18) s2 + (OLS coefficient matrix) e2 .
Then

.0788 .0527
.0443
.0526 .0836

.0425
.0303
.0420
.0561
2

.0285
.0283 .0487
V ar(s) =
s +

.0544 .0671

.1014

.01284 .00774
.00603
.00535 .01006

.00982
.00516
.00677
.00599

.00731
.00159 .00675

e .

.01133 .00883

.01462
s0 A1s = .36018, with expectation .05568 e2 + .22977 s2 .
e2 for approximate MIVQUE
can be computed from
X
y0 y
yi.2 /ni. .
i

15

Chapter 17
The Two Way Classification
C. R. Henderson
1984 - Guelph

This chapter is concerned with a linear model in which


yijk = + ai + bj + ij + eijk .

(1)

For this to be a model we need to specify whether a is fixed or random, b is fixed or


random, and accordingly whether is fixed or random. In the case of random subvectors
we need to specify the variance-covariance matrix, and that is determined in part by
whether the vector sampled is finite or infinite.

The Two Way Fixed Model

We shall be concerned first with a model in which a and b are both fixed, and as a
consequence so is . For convenience let
ij = + ai + bj + ij .

(2)

Then it is easy to prove that the only estimable linear functions are linear functions of
ij that are associated with filled subclasses (nij > 0). Further notations and definitions
are:
Row mean =
i. .
(3)
Its estimate is sometimes called a least squares mean, but I agree with Searle et al. (1980)
that this is not a desirable name.
Column mean
Row effect
Column effect
General mean
Interaction effect

=
=
=
=
=

.j .

i.
.. .

.j
.. .

.. .
ij
i. .
.j +
.. .

(4)
(5)
(6)
(7)
(8)

From the fact that only ij for filled subclasses are estimable, missing subclasses result in
the parameters of (17.3) . . . (17.8) being non-estimable.

i0 . is not estimable if any ni0 j = 0.

.j 0 is not estimable if any nij 0 = 0.

.. is not estimable if one or more nij = 0.


1

All row effects, columns effects, and interaction effects are non-estimable if one or more
nij = 0. Due to these non-estimability considerations, mimicking of either the balanced
or the filled subclass estimation and tests of hypotheses wanted by many experimenters
present obvious difficulties. We shall present biased methods that are frequently used and
a newer method with smaller mean squared error of estimation given certain assumptions.

BLUE For The Filled Subclass Case

Assuming that V ar(e) = Ie2 , it is easy to prove that


ij = yij. . Then it follows that
BLUE of the ith row mean in the filled subclass case is
1 Xc
y .
(9)
j=1 ij.
c
BLUE of j th column mean is
1 Xr
y .
i=1 ij.
r

(10)

r = number of rows, and


c = number of columns.
BLUE of ith row effect is

1 X
1 X X
y

y .
(11)
ij.
j
i
j ij.
c
rc
Thus BLUE of any of (17.3), . . . , (17.8) is that same function of
ij , where
ij = yij. .
The variances of any of these functions are simple to compute. Any of them can be
P P
expressed as i j kij ij with BLUE =
X X
i

kij yij. .

(12)

The variance of this is


e2

X X
i

kij2 /nij .

(13)

The covariance between BLUEs of linear functions,


X X
i

kij yij. and

X X
i

tij yij.0

is
e2

X X
i

kij tij /nij .

(14)

The numbers required for tests of hypotheses are (17.13) and (17.14) and the associated
BLUEs. Consider a standard ANOVA, that is, mean squares for rows, columns, R C.
The R C sum of squares with (r 1)(c 1) d.f. can be computed by
X X
i

2
yij.
Reduction under model with no interaction.
nij

(15)

The last term of (17.15) can be obtained by a solution to


Di Nij
Nij Dj

Di
Dj
Nij
0
yi
0
yj

diag (n1. , n2. , . . . , nr. ).


diag (n.1 , n.2 , . . . , n.c ).
matrix of all nij .
(y1.. , . . . , yr.. ).
(y.1. , . . . , y.c. ).

=
=
=
=
=

ao
bo

yi
yj

(16)

Then the reduction is


(ao )0 yi + (bo )0 yj .

(17)

Sums of squares for rows and columns can be computed conveniently by the method of
weighted squares of means, due to Yates (1934). For rows compute
i =

1 X
y (i = 1, . . . , r), and
j ij.
c
ki1 =

(18)

1 X 1
.
j n
c2
ij

Then the row S.S. with r 1 d.f. is


X
i

ki i2 (

k )2 /
i i i

k.
i i

(19)

The column S.S. with c 1 d.f. is computed in a similar manner. The error mean
square for tests of these mean squares is
(y0 y

X X
i

y 2 /nij )/(n..
j ij.

rc).

(20)

An obvious limitation of the weighted squares of means for testing rows is that the test
refers to equal weighting of subclasses across columns. This may not be what is desired
by the experimenter.
An illustration of a filled subclass 2 way fixed model is a breed by treatment design
with the following nij and yij. .

Breeds
1
2
3
4

1
5
4
5
4

nij
2
2
2
1
5

Treatments
yij.
3 1 2
1 68 29
2 55 30
4 61 13
4 47 65
3

3
19
36
61
75

XX
i

2
yij.
/nij = 8207.5.

Let us test the hypothesis that interaction is negligible. The reduction under a model
with no interaction can be obtained from a solution to equation (17.21).

8 0

0
0
10

0
0
0
13

5
4
5
4
18

2
2
1
5
0
10

b1
b2
b3
b4
t1
t2

116
121
135
187
231
137

(21)

The solution is (18.5742, 18.5893, 16.3495, 17.4624, -4.8792, -4.0988)0 . The reduction is
8187.933. Then R C S.S. = 8207.5 - 8187.923 = 19.567. S. S. for rows can be formulated
as a test of the hypothesis

1 1 1 0 0 0 0 0 0 1 1 1 11

. = 0.
0
K = 0 0 0 1 1 1 0 0 0 1 1 1
..

0 0 0 0 0 0 1 1 1 1 1 1
43

The
ij are (13.6, 14.5, 19.0, 13.75, 15.0, 18.0, 12.2, 13.0, 15.25, 11.75, 13.0, 18.75).
= (3.6 3.25 3.05)0 .
K0
= K0 [diag (5 2 . . . 4)]1 Ke2
V ar(K0 )

2.4 .7
.7

2
1.95 .7
=
e .
2.15
= 20.54 = SS for rows.
1 K0
0 [V ar(K0 )]
e2 (K0 )
SS for cols. is a test of
K0 =

1 0 1 1 0 1 1 0 1 1 0 1
0 1 1 0 1 1 0 1 1 0 1 1
0

=
K

19.7
15.5

= 0.

2.9 2.0
4.2

=
V ar(K0 )

.
!

e2 .

0 [V ar(K0 )]
1 K0
= 135.12 = SS for Cols.
e2 (K0 )
Next we illustrate weighted squares of means to obtain these same results. Sums of
squares for rows uses the values below

i
ki
15.7
5.29412
15.5833
7.2
13.4833 6.20690
14.5
12.85714

1
2
3
4

X
i

ki i2 = 6885.014.

ki i )2 /

ki = 6864.478.
Diff. = 20.54 as before.
i

Sums of squares for columns uses the values below


bj
kj
1 12.825 17.7778
2 13.875 7.2727
3 17.75
8.

X
X

kj )2 /

kj b2j = 6844.712.

kj = 6709.590.
Diff. = 135.12 as before.

Another interesting method for obtaining estimates and tests involves setting up least
squares equations using Lagrange multipliers to impose the following restrictions
X
Xi
i

ij = 0 for i = 1, . . . , r.
ij = 0 for j = 1, . . . , c.
o = 0

A solution is
b0 = (0, 14, 266, 144)/120.
t0 = (1645, 1771, 2236)/120.
0 = (13, 31, 44, 19, 43, 62, 85, 55, 140, 91, 67, 158)/120.
Using these values,
ij are the yij. , and the reduction in SS is
X X
i

2
yij.
/nij. = 8207.5.

Next the SS for rows is this reduction minus the reduction when bo is dropped from
the equations restricted as before. A solution in that case is
t0 = (12.8133, 14.1223, 17.3099).
0 = (.4509, .4619, .0110, .4358, .1241, .3117,
.0897, 1.4953, 1.4055, .7970, .9093, 1.7063),
and the reduction is 8186.960. The row sums of squares is
8207.5 8186.960 = 20.54 as before.
Now drop to from the equations. A solution is
0 = (13.9002, 15.0562, 13.5475, 14.4887).
b
0 = (.9648, .9390, 1.9039, .2751, .2830, .5581,

.0825, .1309, .0485, 1.1574, 1.3530, 2.5104),


and the reduction is 8072.377, giving the column sums of squares as
8207.5 8072.377 = 135.12 as before.
An interesting way to obtain estimates under the sum to 0 restrictions in is to solve
bo
to

0
X
X
0

0y
,
=X

0 is the submatrix of X
referring to b, t only, and y
is a vector of subclass means.
where X
These equations are

3 0 0 0

3 0 0

3 0

1
1
1
1
4

1
1
1
1
0
4

1
1
1
1
0
0
4

b1
b2
b3
b4
t1
t2
t3

47.1
46.75
40.45
43.5
51.3
55.5
71.0

(22)

A solution is
b0 = (0, 14, 266, 144)/120,
t0 = (1645, 1771, 2236)/120.
This is the same as in the restricted least squares solution. Then
ij = yij. boi toj ,
which gives the same result as before. More will be said about these alternative methods
in the missing subclass case.
6

The Fixed, Missing Subclass Case

When one or more subclasses is missing, the estimates and tests described in Section 2
cannot be effected. What should be done in this case? There appears to be no agreement
among statisticians. It is of course true that any linear functions of ij in which nij > 0
can be estimated by BLUE and can be tested, but these may not be of any particular
interest to the researcher. One method sometimes used, and this is the basis of a SAS
Type 4 analysis, is to select a subset of subclasses, all filled, and then to do a weighted
squares of means analysis on this subset. For example, suppose that in a 3 4 design,
subclass (1,2) is missing. Then one could discard all data from the second column, leaving
a 33 design with filled subclasses. This would mean that rows are compared by averaging
over columns 1,3,4 and only columns 1,3,4 are compared, these averaged over the 3 rows.
One could also discard the first row leaving a 2 4 design. The columns are compared
by averaging over only rows 2 and 3, and only rows 2 and 3 are compared, averaging over
all 4 columns. Consequently this method is not unique because usually more than one
filled subset can be chosen. Further, most experimenters are not happy with the notion
of discarding data that may have been costly to obtain.
Another possibility is to estimate ij for missing subclasses by some biased procedure.
For example, one can estimate ij such that E(
ij ) = + ai + bj + some function of the
ij associated with filled subclasses. One way of doing this is to set up least squares
equations with the following restrictions.
X

j ij

i ij
ij

= 0 for i = 1, . . . , r.
= 0 for j = 1, . . . , c.
= 0 if nij = 0.

This is the method used in Harveys computer package. When equations with these
restrictions are solved,

ij = + aoi + boj + ijo = yij. ,


when nij > 0 and thus is unbiased. A biased estimator for a missing subclass is o +aoi +boj ,
P P
and this has expectation + ai + bj + i j kij ij , where summation in the last term is
P P
over filled subclass and i j kij = 1. Harveys package does not compute this but does
produce least squares means for main effects and some of these are biased.
Thus
ij is BLUE for filled subclasses and is biased for empty subclasses. In the class
of estimators of ij with expectation + ai + bj + some linear function of ij associated
with filled subclasses, this method minimizes the contribution of quadratics in to mean
squared error when the squares and products of the elements of are in accord with no
particular pattern of values. This minimization might appear to be a desirable property,
but unfortunately the method does not control contributions of e2 to MSE. If one wishes
to minimize the contribution of e2 , but not to control on quadratics in , while still
having E(
ij ) contain + ai + bj , the way to accomplish this is to solve least squares
7

equations with dropped. Then the biased estimators in this case for filled as well as
empty subclasses, are

ij = o + aoi + boj .
(23)
A third possibility is to assume some prior values of e2 and squares and products
of ij and compute as in Section 9.1. Then all
ij are biased by ij but have in their
expectations + ai + bj . Finally one could relax the requirement of + ai + bj in the
expectation of
ij . In that case one would assume average values of squares and products
of the ai and bj as well as for the j and use the method described in Section 9.1.
Of these biased methods, I would usually prefer the one in which priors on the ,
but not on a and b are used. In most fixed, 2 way models the number of levels of a and
b are too small to obtain a good estimate of the pseudo-variances of a and b.
We illustrate these methods with a 4 3 design with 2 missing subclasses as follows.
nij
2 3
2 3
2 0
0 1

1
1 5
2 4
3 3

4
2
5
4

1
30
21
12

yij.
2 3 4
11 13 7
6 9
3 15

A Method Based On Assumption ij = 0 If nij = 0

First we illustrate estimation under sum to 0 model for and in addition the assumption that 23 = 32 = 0. The simplest procedure for this set of restrictions is to solve
for ao , bo in equations (17.24).

4 0 0 1

3 0 1

3 1

1
1
0
0
2

1
0
1
0
0
2

1
1
1
0
0
0
3

ao
bo

19.333
10.05
10.75
15.25
8.5
7.333
9.05

(24)

+ 11
+ 13
+ 27 = 19.333, etc. for others. A solution is
The first right hand side is 30
5
2
3
(3.964, 2.286, 2.800, 2.067, 1.125, .285, 0). The estimates of ij are yij. for filled subclasses
and 2.286 + .285 for
23 and 2.800 + 1.125 for
32 . If ij are wanted they are
11 = y11 3.964 1.125
etc, for filled subclasses, and 0 for 23 and 32 .
8

The same results can be obtained, but with much heavier computing by solving least
P
P
squares equations with restrictions on that are j ij = 0 for all i, i ij = 0 for all
j, and ij = 0 for subclasses with nij = 0. From these equations one can obtain sums of
squares that mimic weighted squares of means. A solution to the restricted equations is
o
ao
bo
o

=
=
=
=

0,
(3.964, 2.286, 2.800, 2.067)0 .
(1.125, .285, 0)0 .
(.031, .411, .084, .464, .897, .411,
0, .486, .866, 0, .084, .950)0 .

Note that the solution to conforms to the restrictions imposed. Also note that this
solution is the same as the one previously obtained. Further,
ij = o + aoi + boj + ijo = yij.
for filled subclasses.
A test of hypothesis that the main effects are equal, that is
i. =
i0 for all pairs of i,
i , can be effected by taking a new solution to the restricted equations with ao dropped.
Then the SS for rows is
( o )0 RHS ( o )0 RHS ,
(25)
0

where o is a solution to the full set of equations, and this reduction is simply i j yij2 /nij. ,
o is a solution with a deleted from the set of equations, and RHS is the right hand side.
This tests a nontestable hypothesis inasmuch as the main effects are not estimable when
subclasses are missing. The test is valid only if ij are truly 0 for all missing subclasses,
and this is not a testable assumption, Henderson and McAllister (1978). If one is to use
a test based on non-estimable functions, as is done in this case, there should be some
attempt to evaluate the numerator with respect to quadratics in fixed effects other than
those being tested and use this in the denominator. That is, a minimum requirement
could seem to be a test of this sort.
P P

E(numerator) = Qt (a) + Q (fixed effects causing bias in the estimator)


+ linear functions of random variables.
Then the denominator should have the same expectation except that Qt (a), the quadratic
in fixed effects being tested, would not be present. In our example the reduction under
the full model with restrictions on is 579.03, and this is the same as the uncorrected
subclass sum of squares. A solution with restricted as before and with a dropped is
o = 0,
bo = (5.123, 4.250, 4.059, 2.790)0 ,
o = (.420, .129, .119, .430, .678, .129,
0, .549, 1.098, 0, .119, .979)0 .
This gives a reduction of 566.32. Then the sum of squares with 2 df for the numerator
is 579.03-566.32, but
e2 is not an appropriate denominator MS, when
e2 is the within
9

subclass mean square, unless 23 and 32 are truly equal to zero, and we cannot test this
assumption.
Similarly a solution when b is dropped is
o = 0,
ao = (5.089, 3.297, 3.741)0 ,
o = (.098, .254, .355, .003, 1.035, .254,
0, .781, 1.133, 0, .355, .778)0 .
The reduction is 554.81. Then if 23 and 32 = 0, the numerator sum of squares with 3 df
is 579.03-554.81. The sum of squares for interaction with (3-1)(4-1)-2 = 4 df. is 579.03
- reduction with and the Lagrange multiplier deleted. This latter reduction is 567.81
coming from a solution
o = 0,
ao = (3.930, 2.296, 2.915)0 , and
o = (2.118, 1.137, .323, 0)0 .

Biased Estimation By Ignoring

Another biased estimation method sometimes suggested is to ignore . That is, least
squares equations with only o , ao , bo are solved. This is sometimes called the method
of fitting constants, Yates (1934). This method has quite different properties than the
method of Section 17.4. Both obtain estimators of ij with expectations +ai +bj + linear
functions of ij . The method of section 17.4 minimizes the contribution of quadratics in
to MSE, but does a poor job of controlling on the contribution of e2 . In contrast,
the method of fitting constants minimizes the contribution of e2 but does not control
quadratics in . The method of the next section is a compromise between these two
extremes.
A solution for our example for the method of this section is
o = 0,
ao = (3.930, 2.296, 2.915)0 ,
bo = (2.118, 1.137, .323, 0)0 .
Then if we wish
ij these are o + aoi + boj .
A test of row effects often suggested is to compute the reduction in SS under the
model with dropped minus the reduction when a and are dropped, the latter being
P
2
/n.j. . Then this is tested against some denominator. If
e2 is used, the
simply j y.j.
10

denominator is too small unless is 0 . If R C for MS is used, the denominator


is probably too large. Further, the numerator is not a test of rows averaged in some
logical way across columns, but rather each row is averaged differently depending upon
the pattern of subclass numbers. That is, K0 is dependent upon the incidence matrix,
an obviously undesirable property.

Priors On Squares And Products Of

The methods of the two preceding sections control in the one case on and the other
on e2 as contributors to MSE. The method of this section is an attempt to control on
both. The logic of the method depends upon the assumption that there is no pattern of
values of , such, for example as linear by columns or linear by rows. Then consider the
matrix of squares and products of elements of ij for all possible permutations of rows
and columns. The average values are found to be
ij2
ij ij 0
ij i0 j
ij i0 j 0

=
=
=
=

.
/(c 1).
/(r 1).
/(r 1)(c 1).

(26)

for , this is the same matrix as that for V ar() in the


Note that if we substitute
finite random rows and finite random columns model. Then if we have estimates of e2
and or an estimate of the relative magnitudes of these parameters, we can proceed to
estimate with a and b regarded as fixed and regarded as a pseudo random variable.
We illustrate with our same numerical example. Assume that e2 = 20 and =
6. Write the least squares equations that include 23 and 32 , the missing subclasses.
Premultiply the last 12 equations by

6 2 2 2 3
1
1
1 3
1
1
1

6 2 2
1 3
1
1
1 3
1
1

6 2
1
1 3
1
1
1 3
1

6
1
1
1 3
1
1
1 3

6 2 2 2 3
1
1
1

6 2 2
1 3
1
1

6 2
1
1 3
1

6
1
1
1 3

6
2
2
2

6 2 2

6
2

(27)

Then add 1 to each of the last 12 diagonals. The resulting coefficient matrix is (17.28) . . .
(17.31). The right hand side vector is (3.05, 1.8, 1.5, 3.15, .85, .8, 2.6, .4, 1.8, -4.8, .95,
11

1.15, -2.25, .15, -3.55, -1.55, .45, 4.65)0 = (a1 a2 a3 b1 b2 b3 ). Thus and b4 are deleted,
which is equivalent to obtaining a solution with o = 0, bo4 = 0.
Upper left 9 9

.6
0
0
.25
.1
.15 .25
.1 .15
0
.55
0
.2
.1
0
0
0
0

0
0
.4
.15
0
.05
0
0
0

.25
.2 .15
.6
0
0 .25
0
0

.1
.1
0
0
.2
0
0
.1
0

.15
0 .05
0
0
.2
0
0 .15

.8 .25 .2
.45 .1 .25 2.5 .2 .3

.4
.15
.4 .15
.3 .25 .5 1.6 .3
0
.55
.2 .15 .1
.75 .5 .2 1.9

(28)

Upper right 9 9

.1
0
0 0
0
.2
.1 0
0
0
0 0
0
.2
0 0
0
0
.1 0
0
0
0 0
.2 .6
.1 0
.2
.2 .3 0
.2
.2
.1 0

0
0 0
0
.25
0 0
0
0
.15 0
.05
0
.15 0
0
0
0 0
0
0
0 0
.05
.25 .45 0
.05
.25
.15 0
.05
.25
.15 0 .15

0
0
.2
0
0
0
.2
.2
.2

(29)

Lower left 9 9

.4
.4
.2
0
.2
.4
.2
0
.2

.45
.5
.3
1.1
.9
.25
.15
.55
.45

.4 .15 .1 .25 .5 .2 .3
.2
0 .1
.2 .75
.1
.15

.4
0
.3
.2
.25 .3
.15

.2
0 .1 .6
.25
.1 .45

.4
0 .1
.2
.25
.1
.15

.4 .45
.2
.05 .75
.1
.15

.8
.15 .6
.05
.25 .3
.15

.4
.15
.2 .15
.25
.1 .45
.8
.15
.2
.05
.25
.1
.15

12

(30)

Lower right 9 9

1.6
.1
.1
.1
.3
.1
.1
.1
.3

.2
2.2
.4
.4
.4
.6
.2
.2
.2

.1
0 .75
.15
0
.05 .6
.2
0 .5 .45
0
.05
.2

1.6
0 .5
.15
0
.05
.2

.2 1.0 .5
.15
0 .15
.2

.2
0
2.5
.15
0
.05 .6

.1
0
.25
1.9
0 .1 .4

.3
0
.25 .3 1.0 .1 .4

.1
0
.25 .3
0
1.3 .4
.1
0 .75 .3
0 .1 2.2

(31)

The solution is
ao = (3.967, 2.312, 2.846)0 .
bo = (2.068, 1.111, .288, 0)0 .
o displayed as a table is
1
2
3
4
1 -.026 .230 .050 -.255
2 .614 -.230
0 -.384
3 -.588
0 -.050 .638
ij = aoi + boj + ijo . The same
Note that the ijo sum to 0 by rows and columns. Now the
solution can be obtained more easily by treating as a random variable with V ar = 12I.
rc
The value 12 comes from (r1)(c1)
6 = (3)4
(6) = 12. The resulting coefficient matrix
(2)3
(times 60) is in (17.32). The right hand side vector is (3.05, 1.8, 1.5, 3.15, .85, .8, 1.5, .55,
.65, .35, 1.05, .3, 0, .45, .6, 0, .15, .75)0 . and b4 are dropped as before.

36

0
33

0 15
0 12
24 9
36

6
6
0
0
12

9 15 6 9 6 0 0 0 0 0 0 0 0
0 0 0 0 0 12 6 0 15 0 0 0 0

3 0 0 0 0 0 0 0 0 9 0 3 12

0 15 0 0 0 12 0 0 0 9 0 0 0

0 0 6 0 0 0 6 0 0 0 0 0 0
12 0 0 9 0 0 0 0 0 0 0 3 0

(32)

diag (20,11,14,11,17,11,5,20,14,5,8,17)
The solution is the same as before. This is clearly an easier procedure than using the
equations of (17.28). The inverse of the matrix of (17.28) post-multiplied by
I 0
0 P
13

where P = the matrix of (17.27), is not the same as the inverse of the matrix of (17.32)
with diagonal G, but if we pre-multiply each of them by K0 and then post-multiply by
K, where K0 is the representation of ij in terms of a, b, , we obtain the same matrix,
which is the mean squared error for the
ij under the priors used, e2 = 20 and = 6.
Biased estimates of
ij are in both methods

6.009 5.308 4.305 3.712

4.994 3.192 2.600 1.928 .


4.327 3.957 3.084 3.484
The estimated MSE matrix of this vector is
Upper left 8 8

3.58

.32
8.34

.18
.15
6.01

.46
.28 .32 .87 .09
.64 .43 1.66 2.29 .32

.38
.02 .15
4.16
.04

7.64 .29 .64 1.77


.49

4.40
.43
1.93
.31

8.34
2.29
.32

33.72 1.54
3.63

Upper right 8 4

.33
.04
.33
.37
.34
.04
1.12
.25

.70 .55 .11


5.21 .45
.08
1.33
1.97 .24
1.47 1.14
.57
1.09 .06 .24
4.79
.45 .08
4.67
7.51 1.04
1.04 .13
.22

Lower right 4 4

5.66

2.62
33.60

1.00 .50
4.00 2.03

14.08 .73
4.44

Suppose we wish an approximate test of the hypothesis that


i. are equal. In this
0
case we could write K as
1 0 1 0 0 0 14 14 41 14 0 0 0 0
0 1 1 0 0 0 0 0 0 0 14 14 41 14
14

1
4
1
4

1
4
1
4

1
4
1
4

1
4
1
4

Then compute K0 CK, where C is either the g-inverse of (17.28) post-multiplied by


I 0
0 P

or the g-inverse of the matrix using diagonal G. This 2 2 matrix gives the MSE for e2 =
20, = 6. Finally premultiply the inverse of this matrix by (K0 o )0 and post-multiply by
K0 o . This quantity is distributed approximately as 22 under the null hypothesis.

Priors On Squares And Products Of a, b, And


Another possibility for biased estimation is to require only that
E(
ij ) = + linear functions of a, b, .

We do this by assuming prior values of squares and products of a and of b as

1
...
1
r1

1
r1

a2 and

1
...

1
c1

1
c1

2
,
b

respectively, where a2 and b2 are pseudo-variances. The prior on is the same as in


Section 17.6. Then we apply the method for singular G.
To illustrate in our example, let the priors be e2 = 20, a2 = 4, b2 = 9, 2 = 6. Then
we multiply all equations except the first pertaining to by
0
0
Pa a2

2
0
0
P

b b
2
0
0
P

and add 1s to all diagonals except the first. This yields the equations with coefficient
matrix in (17.33) . . . (17.36) and right hand vector = (6.35, 5.60, -1.90, -3.70, 18.75, -8.85,
-9.45, -.45, 2.60, .40, 1.80, -4.80, .95, 1.15, -2.25, .15, -3.55, -1.55, .45, 4.65)0 .
Upper left 10 10

1.55
.6
.55
.4
.6
.2
.2
.55
.25
.1

.5
3.4 1.1 .8
.3
.2
.5
.5
1.0
.4

.2 1.2
3.2 .8
0
.2 .4
.4 .5 .2

.7 1.2 1.1
2.6 .3 .4 .1
.1 .5 .2

2.55
1.2
.75
.6
6.4 .6 .6 1.65 2.25 .3

2.25 .6 .45 1.2 1.8 2.8 .6 1.65 .75


.9

2.25
0 1.65 .6 1.8 .6
2.8 1.65 .75 .3

1.95 .6
1.35
1.2 1.8 .6 .6
5.95 .75 .3

.35
.8 .25 .2
.45 .1 .25
.25
2.5 .2

.15 .4
.15
.4 .15
.3 .25
.25 .5 1.6
15

(33)

Upper right 10 10

.15
.6
.3
.3
.45
.45
1.35
.45
.3
.3

.1
.4
.2
.2
.3
.3
.3
.9
.2
.2

.2
.4
.8
.4
1.8
.6
.6
.6
.6
.2

.1
.2
.4
.2
.3
.9
.3
.3
.1
.3

0
.25
.15 0
.05
0 .5 .3 0 .1
0
1.0 .3 0 .1
0 .5
.6 0
.2
0 .75 1.35 0 .15
0 .75 .45 0 .15
0 .75 .45 0
.45
0 2.25 .45 0 .15
0
.25 .45 0
.05
0
.25
.15 0
.05

.2
.4
.4
.8
.6
.6
.6
1.8
.2
.2

(34)

Lower left 10 10

.75
0
.55
.2 .15 .1
.75
1.25 .4 .45 .4 .15 .1 .25
.1 .4
.5 .2
0 .1
.2
.3
.2 .3
.4
0
.3
.2
.9
0 1.1
.2
0 .1 .6
.7
.2
.9 .4
0 .1
.2
.25 .4 .25
.4 .45
.2
.05
.45
.2
.15 .8
.15 .6
.05
.15
0
.55 .4
.15
.2 .15
.55
.2 .45
.8
.15
.2
.05

.25 .5 .2

.75 .5 .2

.2 .75
.1

.2
.25 .3

.2
.25
.1

.6
.25
.1

.05 .75
.1

.05
.25 .3

.05
.25
.1

.15
.25
.1

(35)

Lower right 10 10

1.9 .2
.2
.1
0
.25
.15
0 .15
.2

.3 1.6
.2
.1
0 .75
.15
0
.05 .6

.15
.1 2.2 .2
0 .5 .45
0
.05
.2

.15
.1 .4 1.6
0 .5
.15
0
.05
.2

.45
.1 .4 .2 1.0 .5
.15
0 .15
.2

.15 .3 .4 .2
0
2.5
.15
0
.05 .6

.15
.1 .6
.1
0
.25
1.9
0 .1 .4

.15
.1
.2 .3
0
.25 .3 1.0 .1 .4

.45
.1
.2
.1
0
.25 .3
0
1.3 .4

.15 .3
.2
.1
0 .75 .3
0 .1 2.2

The solution is

= 4.014,
= (.650, .467, .183),
a

b = (.972, .120, .208, .885),

.111
.225 .170 .166
=
.303 .515

.
.489 .276
.599
.051 .133
.681
16

(36)

P
P
Note that i a
i = j bj = 0 and the ij sum to 0 by rows and columns. A solution
can be obtained by pretending that a, b, are random variables with V ar(a) = 3I,
V ar(b) = 8I, V ar() = 12I. The coefficient matrix of these is in (17.37) . . . (17.39) and
the right hand side is (6.35, 3.05, 1.8, 1.5, 3.15, .85, .8, 1.55, 1.5, .55, .65, .35, 1.05, .3, 0,
.45, .6, 0, .15, .75). The solution is

= 4.014,
= (.325, .233, .092)0 ,
a
= (.648, .080, .139, .590)0 ,
b

.760
.590
.086 .136
=
0 1.043

.580 .470
.
.367
0 .294
.295
This is a different solution from the one above, but the
ij are identical for the two. These
are as follows, in table form,

5.747 5.009 4.286 3.613

5.009 3.391 3.642 2.148 .


4.204 4.003 3.490 3.627
Note that

a
i =

= 0, but the ij do not sum to 0 by rows and columns.

j bj

X
j

X
i

ij = 2 a
i /a2 = 4 a
i .
ij = 2 bj /b2 = 1.5 bj .

The pseudo variances come from (15.18) and (15.19).


3
3
(4)
(6) = 3.
2
(2)3
4
4
=
(9)
(6) = 8.
3
(2)3
(3)4
=
(6) = 12.
(2)3

2
a
=
2
b
2

Upper left 8 8

1
120

186

72
112

66 48 72 24 24 66
0 0 30 12 18 12

106 0 24 12 0 30

88 18 0 6 24

87 0 0 0

39 0 0

39 0
81
17

(37)

Lower left 12 8 and (upper right 12 8)

1
120

30
12
18
12
24
12
0
30
18
0
6
24

30 0 0 30 0 0 0

12 0 0 0 12 0 0

18 0 0 0 0 18 0

12 0 0 0 0 0 12

0 24 0 24 0 0 0

0 12 0 0 12 0 0

0 0 0 0 0 0 0

0 30 0 0 0 0 30

0 0 18 18 0 0 0

0 0 0 0 0 0 0

0 0 6 0 0 6 0

0 0 24 0 0 0 24

(38)

Lower 12 12
= 1201 diag (40, 22, 28, 22, 34, 22, 10, 40, 28, 10, 16, 34)

(39)

Approximate tests of hypotheses can be effected as described in the previous section.


K0 for SS Rows is (times .25)
0 4 0 4 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 0 4 4 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1

K0 for SS columns is

0 0 0 0 3 0 0 3 1 0 0 1 1 0 0 1 1 0 0 1

0 0 0 0 0 3 0 3 0 1 0 1 0 1 0 1 0 1 0 1 /3.
0 0 0 0 0 0 3 3 0 0 1 1 0 0 1
1 0 0 1 1

The Two Way Mixed Model

The two way mixed model is one in which the elements of the rows (or columns) are a
random sample from some population of rows (or columns), and the levels of columns (or
rows) are fixed. We shall deal with random rows and fixed columns. There is really more
than one type of mixed model, as we shall see, depending upon the variance-covariance
matrices, V ar(a) and V ar(), and consequently V ar(), where = vector of elements,
+ ai + bj + ij . The most commonly used model is

V ar() =

C 0 0
0 C 0

.. ..
..
,
. .
.
0 0
C

18

(40)

where C is q q, q being the number of columns. There are p such blocks down the
diagonal, where p is the number of rows. C is a matrix with every diagonal = v and every
off-diagonal = c. If the rows were sires and the columns were traits and if V ar(e) = Ie2 ,
this would imply that the heritability is the same for every trait, 4 v/(4v + e2 ), and the
genetic correlation between any pair of traits is the same, c/v. This set of assumptions
should be questioned in most mixed models. Is it logical to assume that V ar(ij ) =
V ar(ij 0 ) and that Cov(ij , ik ) = Cov(ij , im )? Also is it logical to assume that
V ar(eijk ) = V ar(eij 0 k )? Further we cannot necessarily assume that ij is uncorrelated
with i0 j . This would not be true if the ith sire is related to the i0 sire. We shall deal more
specifically with these problems in the context of multiple trait evaluation.
Now let us consider what assumptions regarding
a

V ar

will lead to V ar() like (17.40). Two models commonly used in statistics accomplish this.
The first is based on the model for unrelated interactions and main effects formulated in
Section 15.4.
V ar(a) = Ia2 ,
since the number of levels of a in the population , and
V ar(ij ) = 2 .
Cov(ij , ij 0 ) = 2 /(q 1).
Cov(ij , i0 j ) = 2 /(one less than population levels of a) = 0.
Cov(ij , i0 j ) = 2 /(q 1) (one less than population levels of a) = 0.
This leads to

P 0

V ar() = 0 P
.. ..
. .

0 y2 ,
..
.

(41)

where P is a matrix with 10 s in diagonals and 1/(q 1) in all off-diagonals. Under this
model
V ar(ij ) = a2 + 2 .
Cov(ij , ij 0 ) = a2 2 /(q 1).

(42)

An equivalent model often used that is easier from a computational standpoint, but
less logical is
2
2
V ar(a ) = Ia
, where a
= a2 2 /(q 1).
2
2
V ar( ) = I
, where
= q2 /(q 1).

19

(43)

Note that we have re-labelled the row and interaction effects because these are not the
same variables as a and .
The results of (17.43) come from principles described in Section 15.9. We illustrate
these two models (and estimation and prediction methods) with our same two way example. Let
V ar(e) = 20I, V ar(a) = 4I, and

P 0 0

V ar() = 6
0 P 0 ,
0 0 P
where P is a 4 4 matrix with 1s for diagonals and 1/3 for all off-diagonals. We set up
the least squares equations with deleted, multiply the first 3 equations by 4 I3 and the
last 12 equations by V ar() described above. Then add 1 to the first 4 and the last 12
diagonal coefficients. This yields equations with coefficient matrix in (17.44) . . . (17.47).
The right hand side is (12.2, 7.2, 6.0, 3.15, .85, .8, 1.55, 5.9, -1.7, -.9, -3.3, 4.8, -1.2, -3.6,
0, 1.8, -3.0, -1.8, 3.0)0 .
Upper left 10 10

3.4
0
0 1.0
.4
.6
.4 1.0
.4
.6
0 3.2
0
.8
.4
0 1.0
0
0
0
0
0 2.6
.6
0
.2
.8
0
0
0
.25 .2 .15
.6
0
0
0 .25
0
0
.1 .1
0
0
.2
0
0
0
.1
0
.15
0 .05
0
0
.2
0
0
0 .15
.1 .25 .2
0
0
0 .55
0
0
0
.8
0
0 1.5 .2 .3 .2 2.5 .2 .3
.4
0
0 .5
.6 .3 .2 .5 1.6 .3
0
0
0 .5 .2
.9 .2 .5 .2 1.9

(44)

Upper right 10 9

.4 0 0 0
0
0 0
0 0

0 .8 .4 0 1.0
0 0
0 0

0 0 0 0
0 .6 0 .2 .8

0 .2 0 0
0 .15 0
0 0

0 0 .1 0
0
0 0
0 0

0 0 0 0
0
0 0 .05 0

.1 0 0 0 .25
0 0
0 .2

.2 0 0 0
0
0 0
0 0

.2 0 0 0
0
0 0
0 0

.2 0 0 0
0
0 0
0 0

20

(45)

Lower left 9 10

.4
0
0 .5
0
.5
0 1.2
0 .3
0 .4
0 1.1
0 .4
0
.9
0 .4
0
0
.4
.9
0
0 .8 .3
0
0 .4 .3
0
0
.8 .3

.2
.2
.6
.2
.2
0
0
0
0

.3
0
0
0
0
.1
.1
.3
.1

.6 .5 .2 .3
.5
0
0
0

.5
0
0
0

.5
0
0
0

1.5
0
0
0

.4
0
0
0

.4
0
0
0

.4
0
0
0
1.2
0
0
0

(46)

Lower right 9 9

1.6
0
0
0
0
0
0
0
0
0 2.2 .2
0 .5
0
0
0
0

0 .4 1.6
0 .5
0
0
0
0

0 .4 .2 1.0 .5
0
0
0
0

0 .4 .2
0 2.5
0
0
0
0

0
0
0
0
0 1.9
0 .1 .4

0
0
0
0
0 .3 1.0 .1 .4

0
0
0
0
0 .3
0 1.3 .4
0
0
0
0
0 .3
0 .1 2.2

(47)

The solution is
= (.563, .437, .126)0 .
a
= (5.140, 4.218, 3.712, 2.967)0 .
b

.104
.163 .096 .170
=
.219 .414

.421 .226
.
.524
.063 .122
.584
The ij sum to 0 by rows and columns.
When we employ the model with V ar(a ) = 2I and V ar( ) = 8I, the coefficient
matrix is in (17.48) . . . (17.50) and the right hand side is (3.05, 1.8, 1.5, 3.15, .85, .8, 1.55,
1.5, .55, .65, .35, 1.05, .3, 0, .45, .6, 0, .15, .75)0 .
Upper left 7 7

1.1

0
1.05

0 .25 .1 .15 .1

0 .2 .1
0 .25

.9 .15 0 .05 .2

.6 0
0
0

.2
0
0

.2
0

.55
21

(48)

Lower left 12 7 and (upper right 7 12)0

.25
0
0 .25 0
0
0

.1
0
0
0 .1
0
0

.15
0
0
0 0 .15
0

.1
0
0
0 0
0 .1

0 .2
0 .2 0
0
0

0 .1
0
0 .1
0
0

0
0
0
0 0
0
0

0 .25
0
0 0
0 .25

0
0 .15 .15 0
0
0

0
0
0
0 0
0
0

0
0 .05
0 0 .05
0

0
0 .2
0 0
0 .2

(49)

Lower right 12 12
= diag (.375, .225, .275, .225, .325, .225, .125, .375, .275, .125, .175, .325).

(50)

The solution is
= (.282, .219, .063)0 , different from above.
a
= (5.140, 4.218, 3.712, 2.967)0 , the same as before.
b

.385
.444
.185
.112
=
0 .632

.202 .444
,
.588
0 .185
.521
different from above. Now the sum to 0 by columns, but not by rows. This sum is
2
2

a
i /a
= 4
ai .

As we should expect, the predictions of subclass means are identical in the two solutions.
These are

5.807 4.945 4.179 3.360

5.124 3.555 3.493 2.116 .


4.490 4.155 3.463 3.425
These are all unbiased, including missing subclasses. This is in contrast to the situation
in which both rows and columns are fixed. Note, however, that we should not predict ij
except for j = 1, 2, 3, 4. We could predict ij (j=1,2,3,4) for i > 3, that is for rows not in
the sample. BLUP would be bj . Remember, that bj is BLUP of bj + the mean of all ai in
the infinite population, and ai is BLUP of ai minus the mean of all ai in the population.
We could, if we choose, obtain biased estimators and predictors by using some prior
on the squares and products of b, say

1
...
1
3

1
3

1
22

b2 ,

where b2 is a pseudo-variance.
Suppose we want to estimate the variances. In that case the model with
2
2
V ar(a ) = Ia
and V ar( ) = I

is obviously easier to deal with than the pedagogically more logical model with V ar()
2
2
not a diagonal matrix. If we want to use that model, we can estimate a
and
and
2
2
then by simple algebra convert those to estimates of a and .

23

Chapter 18
The Three Way Classification
C. R. Henderson
1984 - Guelph

This chapter deals with a 3 way classification model,


yijkm = + ai + bj + ck + abij + acik + bcjk + abcijk + eijkm .

(1)

We need to specify the distributional properties of the elements of this model.

The Three Way Fixed Model

We first illustrate a fixed model with V ar(e) = Ie2 . A simple way to approach this
model is to write it as
yijkm = ijk + eijkm .
(2)
Then BLUE of ijk is y ijk. provided nijk > 0. Also BLUE of
XXX
i

pijk ijk =

XXX
i

pijk y ijk. ,

where summation is over subclasses that are filled. But if subclasses are missing, there
may not be linear functions of interest to the experimenter. Analogous to the two-way
fixed model we have these definitions.
a effects
b effects
c effects
ab interactions
abc interactions

=
=
=
=
=

i.. ... ,
.j. ... ,
..k ... ,
ij. i.. .j. + ... ,
ijk ij. i.k .jk
+i.. + .j. + ..k ... .

(3)

None of these is estimable if a single subclass is missing. Consequently, the usual tests of
hypotheses cannot be effected exactly.

The Filled Subclass Case

Suppose we wish to test the hypotheses that a effects, b effects, c effects, ab interactions, ac interactions, bc interactions, and abc interactions are all zero where these are
defined as in (18.3). Three different methods will be described. The first two involve
setting up least squares equations reparameterized by
X

ai = b j = c k = 0

abij = 0 for all i, etc.

abcijk = 0 for all i, etc.

(4)

jk

We illustrate this with a 2 3 4 design with subclass numbers and totals as follows

b1
a
1
2

a c1
1 53
2 111

c1
3
7

c2
5
2

c3
2
5

b1
c2 c3
110 41
43 89

c4
6
1

c1
5
6

nijk
b2
c2 c3
2 1
2 4

b3
c4
4
3

c1
5
3

c2
2
4

yijk.
b2
c4 c1 c2 c3 c4 c1
118 91 31 9 55 96
9 95 26 61 35 52

c3
1
6

c4
1
1

b3
c2
31
55

c3 c4
8 12
97 10

The first 7 columns of X are

1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1

1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1

1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1

0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1

1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1

0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1

0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1

The first column pertains to , the second to a, the next two to b, and the last 3 to c. The
remaining 17 columns are formed by operations on columns 2-7. Column 8 is formed by
taking the products of corresponding elements of columns 2 and 3. Thus these are 1(1),
1(1), 1(1), 1(1), 1(0), . . ., -1(-1). The other columns are as follows: 9 = 2 4, 10 = 2 5,
11 = 2 6, 12 = 2 7, 13 = 3 5, 14 = 3 6, 15 = 3 7, 16 = 4 5, 17 = 4 6,
18 = 4 7, 19 = 2 13, 20 = 2 14, 21 = 2 15, 22 = 2 16, 23 = 2 17, 24 = 2 18.

This gives the following for columns 8-16 of X

1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1

0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1

1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1

0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1
0
1

0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1
0
0
1
1

1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1

0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1

0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1

0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1

and for columns 17-24 of X,

0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1

0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1

1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1

0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1

0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0

0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
1
0
0
1
1
0
0
1

0
0
0
0
0
1
0
1
0
1
0
1
0
0
0
0
0
1
0
1
0
1
0
1

0
0
0
0
0
0
1
1
0
0
1
1
0
0
0
0
0
0
1
1
0
0
1
1

Then the least squares coefficient matrix is X NX, where N is a diagonal matrix of nijk .
0
The right hand sides are X y., where y. is the vector of subclass totals. The coefficient
matrix of the equations is in (18.5) . . . (18.7). The right hand side is (1338, -28, 213, 42,
259, 57, 66, 137, 36, -149, -83, -320, -89, -38, -80, -30, -97, -103, -209, -16, -66, -66, 11,
19).

Upper left 12 12

81 7

81

8 4 13
1
3
6
2 9 5 17

6 2 9 5 17
8
4
13
1
3

54 23 3 4 5 4 5 11
0 3

50 2 7 7 5 8 4
1
1

45 16
16 11 4
3
6
6

33
16
0
1
6
7
6

.
35 3
1
6
6 5

54 23 3 4 5

50 2 7 7

45 16
16

33
16

35

(5)

Upper right 12 12 and (lower left 12 12)

3 4 5 2 7 7 11
0 3 4
1
1

11
0 3 4
1
1 3 4 5 2 7 7

9
4
5
6
4
5 7 4 13
2 2 5

6
4
5 10
1
3
2 2 5
0 3 9

7
5
5
8
5
5 1
5
5 2
1
1

5
6
5
5
3
5
5 10
5
1
3
1

.
5
5
5
5
5
3
5
5
7
1
1
3

7 4 13
2 2 5
9
4
5
6
4
5

2 2 5
0 3 9
6
4
5 10
1
3

1
5
5 2
1
1
7
5
5
8
5
5

5 10
5
1
3
1
5
6
5
5
3
5

5
5
7
1
1
3
5
5
5
5
5
3

(6)

Lower right 12 12

27

9
22

9 10
9 2
23 2
28

2
8
2
9
19

2
2
9
9
9
21

3
5
5 2
0
0

5
6
5 0 2
0

5
5 3 0
0 5

2
0
0 2
1
1

0 2
0 1 1
1

0
0 5 1
1 7

.
27
9
9 10
2
2

22
9 2
8
2

23 2
2
9

28
9
9

19
9

21
6

(7)

The resulting solution is (15.3392, .5761, 2.6596, -1.3142, 2.0092, 1.5358, -.8864, 1.3834,
-.4886, .4311, .2156, -2.5289, -3.2461, 2.2154, 2.0376, .9824, -1.3108, -1.0136, -1.4858,
-1.9251, 1.9193, .6648, .9469, -6836).
One method for finding the numerator sums of squares is to compare reductions,
that is, subtracting the reduction when each factor and interaction is deleted from the
reduction under the full model. For A, equation and unknown 2 is deleted, for B equations
3 and 4 are deleted, . . . , for ABC equations 19-24 are deleted. The reduction under the
full model is 22879.49 which is also simply
XXX
i

2
yijk.
/nijk .

The sums of squares with their d.f. are as follows.


d.f.
A
1
B
2
C
3
AB
2
AC
3
BC
6
ABC 6

SS
17.88
207.44
192.20
55.79
113.25
210.45
92.73

The denominator MS to use is


e2 = (y0 y - reduction in full model)/(81-24), where 81 is
n, and 24 is the rank of the full model coefficient matrix.
A second method, usually easier, is to compute for the numerator
SS = ( oi )0 (V ar( oi ))1 oi e2 .

(8)

o
o
for ABC.
, . . . , 24
oi is a subvector of the solution, 2o for A; 3o , 4o for B, . . . , 17
o
V ar( i ) is the corresponding diagonal block of the inverse of the 24 24 coefficient
matrix, not shown, multiplied by e2 . Thus

SS for A = .5761 (.0186)1 .5761,


SS for B = (2.6596 1.3142)

.0344 .0140
.0140
.0352

!1

2.6596
1.3142

etc. The terms inverted are diagonal blocks of the inverse of the coefficient matrix. These
give the same results as by the first method.
The third method is to compute
1

0 (V ar(K0i ))

(K0i )
7

e2 .
K0

(9)

is BLUE of , the vector of ijk , and


K0i = 0 is the hypothesis tested for the ith SS.
this is the vector of y ijk .
KA is the 2nd column of X.
KB is columns 3 and 4 of X.
..
.
KABC is the last 6 columns of X.
For example, K0B for SSB is
1 0 1
1 0 1
0 1 1 1 1 1

where 1 = (1 1 1 1) and 0 = (0 0 0 0).


2
1

V ar()/
e = N ,

where N is the diagonal matrix of nijk . Then


1 e2 = (K0 N1 K)1 .
V ar(K0 )
This method leads to the same sums of squares as the other 2 methods.

Missing Subclasses In The Fixed Model

When one or more subclasses is missing, the usual estimates and tests of main effects
and interactions cannot be made. If one is satisfied with estimating and testing functions
like K0 , where is the vector of ijk corresponding to filled subclasses, BLUE and exact
tests are straightforward. BLUE of
K0 = K0 y,

(10)

where y. is the vector of means of filled 3 way subclasses. The numerator SS for testing
the hypothesis that K0 = c is
(K0 y. c)0 V ar(K0 y.)1 (K0 y. c)e2 .

(11)

V ar(K0 y.)/e2 = K0 N1 K,

(12)

where N is a diagonal matrix of subclass numbers. The statistic of (18.11) is distributed


as central 2 e2 with d.f. equal to the number of linearly independent rows of K0 . Then
the corresponding MS divided by
e2 is distributed as F under the null hypothesis.
8

Unfortunately, if many subclasses are missing, the experimenter may have difficulty
in finding functions of interest to estimate and test. Most of them wish correctly or
otherwise to find estimates and tests that mimic the filled subclass case. Clearly this is
possible only if one is prepared to use biased estimators and approximate tests of the
functions whose estimators are biased.
We illustrate some biased methods with the following 2 3 4 example.

b1
a
1
2

c1
3
7

c2
5
2

c3
2
5

b1
a c1
1 53
2 111

c2 c3
110 41
43 89

c4
6
0

c1
5
6

nijk
b2
c2 c3
2 0
2 4

b3
c4
4
3

c1
5
3

c2
2
4

yijk.
b2
c4 c1 c2 c3 c4 c1
118 91 31 55 96
95 26 61 35 52

c3
0
6

c4
0
0

b3
c2
31
55

c3 c4

97

Note that 5 of the potential 24 abc subclasses are empty and one of the potential 12 bc
subclasses is empty. All ab and ac subclasses are filled. Some common procedures are
1. Estimate and test main effects pretending that no interactions exist.
2. Estimate and test main effects, ac interactions, and bc interactions pretending that bc
and abc interactions do not exist.
3. Estimate and test under a model in which interactions sum to 0 and in which each of
the 5 missing abc and the one missing bc interactions are assumed = 0.
All of these clearly are biased methods, and their goodness depends upon the
closeness of the assumptions to the truth. If one is prepared to use biased estimators,
it seems more logical to me to attempt to minimize mean squared errors by using prior
values for average sums of squares and products of interactions. Some possibilities for our
example are:
1. Priors on abc and bc, the interactions associated with missing subclasses.
2. Priors on all interactions.
3. Priors on all interactions and on all main effects.

Obviously there are many other possibilities, e.g. priors on c and all interactions.
The first method above might have the greatest appeal since it results in biases due
only to bc and abc interactions. No method for estimating main effects exists that does
not contain biases due to these. But the first method does avoid biases due to main
effects, ab, and ac interactions. This method will be illustrated. Let , a, b, c, ab, ac
be treated as fixed. Consequently we have much confounding among them. The rank of
the submatrix of X0 X pertaining to them is 1 + (2-1) + (3-1) + (4-1) + (2-1)(3-1) +
(2-1)(4-1) = 12. We set up least squares equations with ab, ac, bc, and abc including
missing subclasses for bc and abc. The submatrix for ab and ac has order, 14 and rank,
12. Treating bc and abc as random results in a mixed model coefficient matrix with
order 50, and rank 48. The OLS coefficient matrix is in (18.13) to (18.18). The upper 26
26 block is in (18.13) to (18.15), the upper right 26 24 block is in (18.16) to (18.17),
and the lower 24 24 block is in (18.18).

16

0 0
11 0
7

0
0
0
0
3
0
0
0
0
0
0
0
0

3
0
0
7
0
0
3
0
0
0
7
0
0

0
0
0
14

5
0
0
2
0
0
0
5
0
0
0
2
0

0
0
0
0
15

2
0
0
5
0
0
0
0
2
0
0
0
5

6
0
0
0
0
0
0
0
0
6
0
0
0

0
0
0
0
0
13

0
5
0
0
6
0
5
0
0
0
6
0
0

3
5
5
0
0
0
13

0
2
0
0
2
0
0
2
0
0
0
2
0

10

5
2
2
0
0
0
0
9

0
0
0
0
4
0
0
0
0
0
0
0
4

2
0
0
0
0
0
0
0
2

0
4
0
0
3
0
0
0
0
4
0
0
0

6
4
0
0
0
0
0
0
0
10

0
0
5
0
0
3
5
0
0
0
3
0
0

0
0
2
0
0
4
0
2
0
0
0
4
0

0
0
0
7
6
3
0
0
0
0
16

0
0
0
0
0
6
0
0
0
0
0
0
6

0
0
0
2
2
4
0
0
0
0
0
8

0
0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
5
4
6
0
0
0
0
0
0
15

(13)

(14)

3
0
0
0
0
0
3
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0

0 0 0
10 0 0
7 0
7

5
0
0
0
0
0
0
5
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0

2
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0

6
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0

0
0
0
0
6

0
5
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0

0
0
0
0
0
11

0
2
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0

0
0
0
0
0
0
4

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

11

0
0
0
0
0
0
0
4

0
4
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0

3
0
0
0
0
0
0
0
7

0
0
5
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0

0
0
0
0
0
0
0
0
0
8

0
0
2
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
2
0
0

0
0
0
0
0
0
0
0
0
0
6

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
7
0
0
0
0
0
0
7
0
0
0
7
0
0
0
0
0
0
0
0
0
0
0

(15)

(16)

0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
2
0
0
0
0
0
0
0
0
0
0

0
0
0
5
0
0
0
0
0
0
0
0
5
0
0
0
5
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
6
0
0
0
0
0
6
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0

0
0
0
0
2
0
0
0
0
0
0
2
0
0
0
0
0
0
0
2
0
0
0
0
0
0

0
0
0
0
4
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
4
0
0
0
0
0

0
0
0
0
3
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
3
0
0
0
0

0
0
0
0
0
3
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0
3
0
0
0

0
0
0
0
0
4
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
4
0
0

0
0
0
0
0
6
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
6
0

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

diag(3, 5, 2, 6, 5, 2, 0, 4, 5, 2, 0, 0, 7, 2, 5, 0, 6, 2, 4, 3, 3, 4, 6, 0).

(17)

(18)

The right hand side is


[322, 177, 127, 243, 217, 204, 240, 172, 41, 173, 258,
124, 247, 35, 164, 153, 130, 118, 186, 57, 61, 90,
148, 86, 97,
0, 53, 110, 41, 118, 91, 31,
0,
55, 96, 31,
0,
0, 111, 43, 89, 0, 95, 26,
61, 35, 52, 55, 97,
0]
2
We use the diagonalization method and assume that the pseudo-variances are bc
= .3
2
2
2
1
1
e , abc = .6 e . Accordingly we add .3 to the 15-26 diagonals and .6 to the 27-50
diagonals of the OLS equations. This gives the following solution

ab = (20.664, 16.724, 17.812, 0, 3.507, 2.487)0


ac = (.047, .618, 0, 1.949, 18.268, 17.976, 18.401, 15.441)0
bc = (1.132, 1.028, .164, .268, .541, .366, .093, .268,
.591, .662, .071, 0)0
12

abc = (1.229, .694, 0, .535, .666, .131, 0, .535, .563,


.563, 0, 0, 1.034, 1.362, .328, 0, .416, .601,
.186, 0, .618, .760, .142, 0)0 .
The biased estimator of ijk is aboij + acoik + bcojk + abcoijk . These are in tabular form ordered
c in b in a by rows.

K=

1
8

1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1

0
0
0
0
1
1
1
1
1
1
1
1
0
0
0
0
1
1
1
1
1
1
1
1

=
,

18.35
21.77
20.50
19.52
17.98
15.61
16.82
13.97
19.01
15.97
17.88
15.86
16.10
20.37
17.91
15.71
15.72
13.50
15.17
11.67
16.99
14.07
16.13
12.95

The variance-covariance matrix of these


ijk is XCX e2 , where X is the 24 x 50 incidence
matrix for y ijk , and C is a g-inverse of the mixed model coefficient matrix. Approximate
tests of hypotheses of K0 = c can be effected by computing
0

c)0 [K0 XCX K]1 (K0


c)/(rank (K0 X)
(K0
e2 ).
Under the null hypothesis this is distributed approximately as F .
for this test
To illustrate suppose we wish to test that all .j. are equal. K0 and
0
0
2
= (2.66966 1.05379) . The pseudo-variances, bc
are shown above and c = 0. K
2
2
0
and abc , could be estimated quite easily by Method 3. One could estimate e by y y reduction under full model, and this is simply
y0 y

X X X
i

13

2
yijk.
/nijk .

Then we divide by n minus the number of filled subclasses. Three reductions are needed to estimate sigma2_bc and sigma2_abc. The easiest ones are probably

Red (full model), described above.
Red (ab, ac, bc).
Red (ab, ac).

Partition the OLS coefficient matrix as (C1 C2 C3). C1 represents the first 14 cols., C2 the next 12, and C3 the last 24. Then compute C2C2′ and C3C3′. Let Q2 be the g-inverse of the matrix for Red (ab, ac, bc), which is the LS coefficient matrix with rows (and cols.) 27-50 set to 0. Q3 is the g-inverse for Red (ab, ac), which is the LS coefficient matrix with rows (and cols.) 15-50 set to 0. Then

E[Red (full)] = 19 sigma2_e + n(sigma2_bc + sigma2_abc) + t,
E[Red (ab, ac, bc)] = 17 sigma2_e + n sigma2_bc + trQ2C3C3′ sigma2_abc + t,
E[Red (ab, ac)] = 12 sigma2_e + trQ3C2C2′ sigma2_bc + trQ3C3C3′ sigma2_abc + t.

t is a quadratic in the fixed effects. The coefficient of sigma2_e is in each case the rank of the coefficient matrix used in the reduction.
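Once the expectation coefficients above have been evaluated, Method 3 reduces to solving a small linear system. A minimal sketch follows (Python with NumPy, which is used for all sketches added in this copy; the function name method3 and all numeric values are hypothetical placeholders, not the quantities of this example).

    import numpy as np

    # Method 3 bookkeeping: differencing the reductions removes the fixed-effect
    # quadratic t and leaves a linear system in the variance components.
    # All coefficients and observed differences below are hypothetical.
    def method3(coeff_diffs, reduction_diffs):
        """Solve E[difference of reductions] = observed difference."""
        return np.linalg.solve(np.asarray(coeff_diffs), np.asarray(reduction_diffs))

    coeff = [[ 2.0,  0.0, 56.0],   # e.g. E[Red(full) - Red(ab,ac,bc)]
             [ 5.0, 21.0,  5.0],   # e.g. E[Red(ab,ac,bc) - Red(ab,ac)]
             [57.0,  0.0,  0.0]]   # e.g. E[y'y - Red(full)]
    diffs = [40.0, 65.0, 120.0]    # hypothetical observed differences
    print(method3(coeff, diffs))   # (sigma2_e, sigma2_bc, sigma2_abc)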

The Three Way Mixed Model

Mixed models could be of two general types, namely one factor fixed and two random
such as a fixed and b and c random, or with two factors fixed and one factor random, e.g.
a and b fixed with c random. In either of these we would need to consider whether the
populations are finite or infinite and whether the elements are related in any way. With
a and b fixed and c random we would have fixed ab interaction and random ac, bc, abc
interactions. With a fixed and b and c random all interactions would be random.
We also need to be careful about what we can estimate and predict. With a fixed
and b and c random we can predict elements of ab, ac, and abc only for the levels of
a in the experiment. With a and b fixed we can predict elements of ac, bc, abc only
for the levels of both a and b in the experiment. For infinite populations of b and c in
the first case and c in the second we can predict for levels of b and c (or c) outside the
experiment. BLUP of them is 0. Thus in the case with c random, a and b fixed, BLUP
of the 1,2,20 subclass when the number of levels of c in the experiment <20, is
mu° + a°_1 + b°_2 + ab°_12.

In contrast, if the number of levels of c in the experiment >19, BLUP is

mu° + a°_1 + b°_2 + c-hat_20 + ac-hat_1,20 + bc-hat_2,20 + ab°_12 + abc-hat_12,20.
In the case with a, b fixed and c random, we might choose to place a prior on ab, especially if ab subclasses are missing in the data. The easiest way to do this would be to treat ab as a pseudo random variable with variance = I sigma2_ab, which could be estimated. We could also use priors on a and b if we choose, and then the mixed model equations would mimic the 3 way random model.

Chapter 19
Nested Classifications
C. R. Henderson
1984 - Guelph

The nested classification can be described as cross-classification with disconnectedness. For example, we could have a cross-classified design with the main factors being sires and dams. Often the design is such that a set of dams is mated to sire 1, a second set to sire 2, etc. Then sigma2_d and sigma2_ds, dams assumed random, cannot be estimated separately, and the sum of these is defined as sigma2_d/s. As is the case with cross-classified data, estimability and methods of analysis depend upon what factors are fixed versus random. We assume that the only possibilities are random within random, random within fixed, and fixed within fixed. Fixed within random is regarded as impossible from a sampling viewpoint.

Two Way Fixed Within Fixed


A linear model for fixed effects nested within fixed effects is

y_ijk = t_i + a_ij + e_ijk

with t_i and a_ij fixed. The j subscript has no meaning except in association with some i subscript. None of the t_i is estimable, nor are differences among the t_i. So far as the a_ij are concerned, Σ_j k_j a_ij for Σ_j k_j = 0 can be estimated. Thus we can estimate 2a_i1 − a_i2 − a_i3. In contrast it is not possible to estimate differences between a_ij and a_gj (i ≠ g) or between a_ij and a_gh (i ≠ g, j ≠ h). Obviously main effects can be defined only as some averaging over the nested factors. Thus we could define the mean of the ith main factor as mu_i = t_i + Σ_j k_j a_ij where Σ_j k_j = 1. Then the ith main effect would be defined as mu_i − mu-bar. Tests of hypotheses of estimable linear functions can be effected in the usual way, that is, by utilizing the variance-covariance matrix of the estimable functions.
Let us illustrate with the following simple example.

  t   a   n_ij   y_ij.   ȳ_ij.
  1   1     4     20      5
      2     5     15      3
  2   3     1      8      8
      4    10     70      7
      5     2     12      6
  3   6     5     45      9
      7     2     16      8

Assume that Var(e) = I sigma2_e.
Main effects

  i    mu-hat_i    Var(mu-hat_i)
  1      4         sigma2_e (4^-1 + 5^-1)/4  = .1125 sigma2_e
  2      7         sigma2_e (1^-1 + 10^-1 + 2^-1)/9 = .1778 sigma2_e
  3      8.5       sigma2_e (5^-1 + 2^-1)/4  = .175 sigma2_e

Test the hypothesis K′mu = 0, where

K′ = ( 1  0  -1 )
     ( 0  1  -1 ),     K′mu-hat = ( 3.5 ) .
                                  (  .5 )

Var(K′mu-hat)/sigma2_e = K′ dg(.1125, .1778, .175) K = ( .2875   .175   )
                                                       ( .175    .35278 ),

with inverse

(  4.98283  -2.47180 )
( -2.47180   4.06081 ).

Then the numerator MS is

(3.5  .5) (  4.98283  -2.47180 ) ( 3.5 )
          ( -2.47180   4.06081 ) (  .5 ) / 2 = 26.70.

Estimate sigma2_e as sigma-hat2_e = the within subclass mean square. Then the test is numerator MS/sigma-hat2_e
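As a quick arithmetic check of the test just computed, the following sketch (Python/NumPy) rebuilds Var(K′mu-hat) from the diagonal variances and reproduces the numerator mean square of 26.70; all numbers are taken from the text above.

    import numpy as np

    # Numerator mean square for H0: K'mu = 0 in the nested example above.
    # Var(mu-hat) = diag(.1125, .1778, .175) in units of sigma2_e.
    K = np.array([[1.0, 0.0, -1.0],
                  [0.0, 1.0, -1.0]])
    v_mu = np.diag([0.1125, 0.1778, 0.175])
    V = K @ v_mu @ K.T                     # (.2875 .175; .175 .3528)
    k_mu = np.array([3.5, 0.5])            # K'mu-hat as given in the text
    num_ms = k_mu @ np.linalg.solve(V, k_mu) / 2.0
    print(round(num_ms, 2))                # approximately 26.70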
with 2,26 d.f. A possible test of differences among the a_ij could be

K′a = ( 1  -1   0   0   0   0   0 )
      ( 0   0   1   0  -1   0   0 ) a,
      ( 0   0   0   1  -1   0   0 )
      ( 0   0   0   0   0   1  -1 )

the estimate of which is (2, 2, 1, 1)′ with

Var = ( .45   0     0     0  )
      (  0   1.5    .5    0  )
      (  0    .5    .6    0  ),
      (  0    0     0     .7 )

the inverse of which is

( 2.22222     0         0        0       )
(    0       .92308   -.76923    0       )
(    0      -.76923   2.30769    0       ).
(    0        0         0       1.42857  )

This gives the numerator MS = 13.24.


The usual ANOVA described in many text books is as follows.

S.S. for T = Σ_i y²_i.. / n_i. − y²_... / n..
S.S. for A = Σ_i Σ_j y²_ij. / n_ij − Σ_i y²_i.. / n_i.

In our example,

MST = (1290.759 − 1192.966)/2 = 48.897.
MSA = (1304 − 1290.759)/1 = 13.24.

Note that the latter is the same as in the previous method. They do in fact test the same hypothesis. But MST is different from the result above, which tests treatments averaged equally over the a nested within them. The second method tests differences among t weighted over a according to the number of observations. Thus the weights for t_1 are (4,5)/9. To illustrate this test,

K′ = ( .444444  .555555    0        0        0      -.71429  -.28572 )
     (   0        0      .07692   .76923   .15385  -.71429  -.28572 ),

Var(K′ȳ_ij)/sigma2_e = ( .253968  .142857 )
                       ( .142857  .219780 ),

with inverse

(  6.20690  -4.03448 )
( -4.03448   7.17242 ).

Then the MS is

(-4.82540  -1.79122) (  6.20690  -4.03448 ) ( -4.82540 )
                     ( -4.03448   7.17242 ) ( -1.79122 ) / 2 = 48.897,

as in the regular ANOVA. Thus ANOVA weights according to the n_ij. This does not appear to be a particularly interesting test.

Two Way Random Within Fixed

There are two different sampling schemes that can be envisioned in the random nested
within fixed model. In one case, the random elements associated with every fixed factor
are assumed to be a sample from the same population. A different situation is one in
which the elements within each fixed factor are assumed to be from separate populations.
The first type could involve treatments as the fixed factors and then a random sample of
sires is drawn from a common population to assign to a particular treatment. In contrast,
if the main factors are breeds, then the sires sampled would be from separate populations,
namely the particular breeds. In the first design we can estimate the difference among
treatments, each averaged over the same population of sires. In the second case we would
compare breeds defined as the average of all sires in each of the respective breeds.

2.1 Sires within treatments

We illustrate this design with a simple example.
             n_ij                      y_ij.
          Treatments                Treatments
  Sires    1   2   3               1    2    3
    1      5   0   0               7    -    -
    2      2   0   0               6    -    -
    3      0   3   0               -    7    -
    4      0   8   0               -    9    -
    5      0   0   5               -    -    8

Let us treat this first as a multiple trait problem with Var(e) = 40I,

    ( s_i1 )   ( 3  2  1 )
Var ( s_i2 ) = ( 2  4  2 ),
    ( s_i3 )   ( 1  2  5 )

where s_ij refers to the value of the ith sire with respect to the jth treatment. Assume that the sires are unrelated. The inverse is

( 3  2  1 )^-1   (  .5    -.25     0    )
( 2  4  2 )    = ( -.25    .4375  -.125 ).
( 1  2  5 )      (   0    -.125    .25  )

Then the mixed model equations are (19.1).

[The 18 x 18 coefficient matrix of these mixed model equations, which carries a factor 1/80, is not reproduced here.] The unknowns are

(s_11, s_12, s_13, s_21, s_22, s_23, s_31, s_32, s_33, s_41, s_42, s_43, s_51, s_52, s_53, t_1, t_2, t_3)′

and the right hand side is

[.175, 0, 0, .15, 0, 0, 0, .175, 0, 0, .225, 0, 0, 0, .2, .325, .4, .2]′.     (1)

The solution is

(.1412, .0941, .0471, .1412, .0941, .0471, .0918, .1835, .0918, .0918, .1835, .0918, 0, 0, 0, 1.9176, 1.5380, 1.600)′.     (2)

Now if we treat this as a nested model, G = diag (3,3,4,4,5). Then the mixed model
equations are in (19.3).

(1/120) ( 55   0   0   0   0  15   0   0 ) ( s_1 )           ( 21 )
        (  0  46   0   0   0   6   0   0 ) ( s_2 )           ( 18 )
        (  0   0  39   0   0   0   9   0 ) ( s_3 )           ( 21 )
        (  0   0   0  54   0   0  24   0 ) ( s_4 ) = (1/120) ( 27 )     (3)
        (  0   0   0   0  39   0   0  15 ) ( s_5 )           ( 24 )
        ( 15   6   0   0   0  21   0   0 ) ( t_1 )           ( 39 )
        (  0   0   9  24   0   0  33   0 ) ( t_2 )           ( 48 )
        (  0   0   0   0  15   0   0  15 ) ( t_3 )           ( 24 )

The solution is

(.1412, .1412, .1835, .1835, 0, 1.9176, 1.5380, 1.6000)′.     (4)

Note that s11 = s1 , s21 = s2 , s32 = s3 , s42 = s4 , s53 = s5 from the solution in
(19.2) and (19.4). Also note that tj are equal in the two solutions. The second method
is certainly easier than the first but it does not predict values of sires for treatments in
which they had no progeny.

2.2 Sires within breeds

Now we assume that we have a population of sires unique to each breed. Then the first model of Section 19.2.1 would be useless. The second method illustrated would be appropriate if sires were unrelated and sigma2_s = 3, 4, 5 for the 3 breeds. If sigma2_s were the same for all breeds, G = I_5 sigma2_s.

Random Within Random


Let us illustrate this model by dams within sires. Suppose the model is

y_ijk = mu + s_i + d_ij + e_ijk.

    ( s )   ( I sigma2_s      0             0        )
Var ( d ) = (     0       I sigma2_d        0        ).
    ( e )   (     0           0        I sigma2_e    )

Let us use the data of Section 19.2.1 but now let t refer to sires and s to dams. Suppose sigma2_e/sigma2_s = 12, sigma2_e/sigma2_d = 10. Then the mixed model equations are in (19.5).

( 23   7  11   5   5   2   3   8   5 ) ( mu  )   ( 37 )
(  7  19   0   0   5   2   0   0   0 ) ( s_1 )   ( 13 )
( 11   0  23   0   0   0   3   8   0 ) ( s_2 )   ( 16 )
(  5   0   0  17   0   0   0   0   5 ) ( s_3 )   (  8 )
(  5   5   0   0  15   0   0   0   0 ) ( d_1 ) = (  7 )     (5)
(  2   2   0   0   0  12   0   0   0 ) ( d_2 )   (  6 )
(  3   0   3   0   0   0  13   0   0 ) ( d_3 )   (  7 )
(  8   0   8   0   0   0   0  18   0 ) ( d_4 )   (  9 )
(  5   0   0   5   0   0   0   0  15 ) ( d_5 )   (  8 )

The solution is (1.6869, .0725, −.0536, −.0189, −.1198, .2068, .1616, −.2259, −.0227)′. Note that Σ s-hat_i = 0 and that 10 (sum of the d-hat within the ith sire)/12 = s-hat_i.

Chapter 20
Analysis of Regression Models
C. R. Henderson
1984 - Guelph

A regression model is one in which Zu does not exist, the first column of X is a vector
of 1s, and all other elements of X are general (not 0s and 1s) as in the classification
model. The elements of X other than the first column are commonly called covariates
or independent variables. The latter is not a desirable description since they are not
variables but rather are constants. In hypothetical repeated sampling the value of X
remains constant. In contrast e is a sample from a multivariate population with mean
= 0 and variance = R, often Ie2 . Accordingly e varies from one hypothetical sample
to the next. It is usually assumed that the columns of X are linearly independent, that
is, X has full column rank. This should not be taken for granted in all situations, for
it could happen that linear dependencies exist. A more common problem is that near
but not complete dependencies exist. In that case, (X0 R1 X)1 can be quite inaccurate,
can be extremely large. Methods for
and the variance of some or all of the elements of
dealing with this problem are discussed in Section 20.2.

Simple Regression Model


The most simple regression model is

y_i = alpha + beta w_i + e_i,

where

X = ( 1  w_1 )
    ( 1  w_2 )
    ( .   .  )
    ( 1  w_n )

The most simple form of Var(e) = R is I sigma2_e. Then the BLUE equations are

( n     w.      ) ( alpha° )   (    y.     )
( w.    Σ w²_i  ) ( beta°  ) = ( Σ w_i y_i ).     (1)

To illustrate suppose n = 5,

w′ = (6, 5, 3, 4, 2),   y′ = (8, 6, 5, 6, 5).

The BLUE equations are

(1/sigma2_e) (  5  20 ) ( alpha° )   (  30 )
             ( 20  90 ) ( beta°  ) = ( 127 ) / sigma2_e.

The inverse of the coefficient matrix is

(  1.8  -.4 )
( -.4    .1 ) sigma2_e.

The solution is (3.2, .7). Var(alpha-hat) = 1.8 sigma2_e, Var(beta-hat) = .1 sigma2_e, Cov(alpha-hat, beta-hat) = −.4 sigma2_e.
Some text books describe the model above as

y_i = mu + beta (w_i − w-bar.) + e_i.

The BLUE equations in this case are

( n          0           ) ( mu°   )   (        y.          )
( 0   Σ (w_i − w-bar.)²  ) ( beta° ) = ( Σ (w_i − w-bar.)y_i ).     (2)

This gives the same solution to beta as (20.1), but mu-hat ≠ alpha-hat except when w-bar. = 0. The equations of (20.2) in our example are

(1/sigma2_e) ( 5   0 ) ( mu°   )   ( 30 )
             ( 0  10 ) ( beta° ) = (  7 ) / sigma2_e.

mu-hat = 6, beta-hat = .7. Var(mu-hat) = .2 sigma2_e, Var(beta-hat) = .1 sigma2_e, Cov(mu-hat, beta-hat) = 0. It is easy to verify that alpha-hat = mu-hat − beta-hat w-bar.. These two alternative models meet the requirements of linear equivalence, Section 1.5.
BLUP of a future y, say y_0 with w_i = w_0, is

alpha-hat + beta-hat w_0 + e-hat_0    or    mu-hat + beta-hat (w_0 − w-bar.) + e-hat_0,

where e-hat_0 is BLUP of e_0 = 0, with prediction error variance sigma2_e. If w_0 = 3, y-hat_0 would be 5.3 in our example. This result assumes that the future alpha (or mu) has the same value as in the population from which the original sample was taken. The prediction error variance is

(1  3) (  1.8  -.4 ) ( 1 )
       ( -.4    .1 ) ( 3 ) sigma2_e + sigma2_e = 1.3 sigma2_e.

Also using the second model it is

(1  -1) ( .2   0 ) (  1 )
        (  0  .1 ) ( -1 ) sigma2_e + sigma2_e = 1.3 sigma2_e

as in the equivalent model.
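The two equivalent computations above are easy to verify numerically. The sketch below (Python/NumPy) reproduces the BLUE solution (3.2, .7), the prediction 5.3 at w_0 = 3, and the prediction error variance 1.3 sigma2_e; sigma2_e is carried as a factor of 1.

    import numpy as np

    # BLUE for y = alpha + beta*w + e with Var(e) = I*sigma2_e, plus the
    # prediction error variance of a future record at w0 = 3.
    w = np.array([6.0, 5.0, 3.0, 4.0, 2.0])
    y = np.array([8.0, 6.0, 5.0, 6.0, 5.0])
    X = np.column_stack([np.ones_like(w), w])
    XtX = X.T @ X                          # [[5, 20], [20, 90]]
    sol = np.linalg.solve(XtX, X.T @ y)    # (3.2, 0.7)
    C = np.linalg.inv(XtX)                 # [[1.8, -.4], [-.4, .1]] * sigma2_e
    x0 = np.array([1.0, 3.0])
    pev = x0 @ C @ x0 + 1.0                # prediction error variance / sigma2_e
    print(sol, x0 @ sol, pev)              # [3.2 0.7]  5.3  1.3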

Multiple Regression Model

In the multiple regression model the first column of X is a vector of 1s, and there
are 2 or more additional columns of covariates. For example, the second column could
represent age in days and the third column could represent initial weight, while y represents final weight. Note that in this model the regression on age is asserted to be the same
for every initial weight. Is this a reasonable assumption? Probably it is not. A possible
modification of the model to account for effect of initial weight upon the regression of final
weight on age and for effect of age upon the regression of final weight on initial weight is

y_i = alpha + beta_1 w_1 + beta_2 w_2 + beta_3 w_3 + e_i,

where

w_3 = w_1 w_2.

This model implies that the regression coefficient for y on w1 is a simple linear function
of w2 and the regression coefficient for y on w2 is a simple linear function of w1 . A model
like this sometimes gives trouble because of the relationship between columns 2 and 3
with column 4 of X . We illustrate with

X = ( 1  6  8  48 )
    ( 1  5  9  45 )
    ( 1  5  8  40 )
    ( 1  6  7  42 )
    ( 1  7  9  63 )

The elements of column 4 are the products of the corresponding elements of columns 2 and 3. The coefficient matrix is

(   5    29    41    238  )
(  29   171   238   1406  )     (3)
(  41   238   339   1970  )
( 238  1406  1970  11662  )

The inverse of this is

( 4780.27  -801.54  -548.45    91.73 )
( -801.54   135.09    91.91   -15.45 )     (4)
( -548.45    91.91    63.10   -10.55 )
(   91.73   -15.45   -10.55    1.773 )

Suppose that we wish to predict y for w_1 = w-bar_1 = 5.8, w_2 = 8.2, w_3 = 47.56 = (5.8)(8.2). The variance of the error of prediction is

(1  5.8  8.2  47.56)(matrix 20.4)(1  5.8  8.2  47.56)′ sigma2_e + sigma2_e = 1.203 sigma2_e.

Suppose we predict y for w_1 = 3, w_2 = 5, w_3 = 15. Then the variance of the error of prediction is 215.77 sigma2_e, a substantial increase. The variance of the prediction error is extremely vulnerable to departures of the w_i from w-bar_i.

Suppose we had not included w_3 in the model. Then the inverse of the coefficient matrix is

( 33.974  -1.872  -2.795 )
( -1.872    .359   -.026 ).
( -2.795   -.026    .359 )

The variances of the errors of prediction of the two predictors above would then be 1.20 and 7.23, the second of which is much smaller than when w_3 is included. But if beta_3 ≠ 0, the predictor is biased when w_3 is not included.

Let us look at the solution when w_3 is included and y′ = (6, 4, 8, 7, 5). The solution is

(157.82, −23.64, −17.36, 2.68).

This is a strange solution that is the consequence of the large elements in (X′X)^-1. A better solution might result if a prior is placed on beta_3. When the prior is 1, we add 1 to the lower diagonal element of the coefficient matrix. The resulting solution is

(69.10, −8.69, −7.16, .967).

This type of solution is similar to ridge regression, Hoerl and Kennard (1970). There is an extensive statistics literature on the problem of ill-behaved X′X. Most solutions to this problem that have been proposed are (1) biased (shrunken) estimation or (2) dropping one or more elements of beta from the model with either backward or forward types of elimination, Draper and Smith (1966). See for example a paper by Dempster et al. (1977) with an extensive list of references. Also Hocking (1976) has many references.
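The effect of the prior on beta_3 described above is easy to reproduce numerically. The sketch below (Python/NumPy) refits the example with and without 1 added to the last diagonal of X′X; it should recover, up to rounding, the two solutions quoted above.

    import numpy as np

    # Prior on beta3: add sigma2_e/sigma2_beta3 (here 1, as in the text) to the
    # last diagonal of X'X and compare with the ordinary least squares solution.
    X = np.array([[1, 6, 8, 48],
                  [1, 5, 9, 45],
                  [1, 5, 8, 40],
                  [1, 6, 7, 42],
                  [1, 7, 9, 63]], dtype=float)
    y = np.array([6.0, 4.0, 8.0, 7.0, 5.0])
    XtX, Xty = X.T @ X, X.T @ y
    ols = np.linalg.solve(XtX, Xty)                              # inflated coefficients
    shrunk = np.linalg.solve(XtX + np.diag([0, 0, 0, 1.0]), Xty) # with the prior on beta3
    print(ols.round(2), shrunk.round(2))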
Another type of covariate is involved in fitting polynomials, for example

y_i = alpha + x_i beta_1 + x²_i beta_2 + x³_i beta_3 + x⁴_i beta_4 + e_i.

As in the case when covariates involve products, the sampling variances of predictors are large when x_i departs far from x-bar. The numerator mean square with 1 d.f. can be computed easily. For the ith beta_i it is

beta-hat²_i / c_{i+1},

where c_{i+1} is the (i+1) diagonal of the inverse of the coefficient matrix. The numerator can also be computed by the reduction under the full model minus the reduction when beta_i is dropped from the solution.

Chapter 21
Analysis of Covariance Model
C. R. Henderson
1984 - Guelph

A covariance model is one in which X has columns referring to levels of factors or


interactions and one or more columns of covariates. The model may or may not contain
Zu. It usually does not in text book discussions of covariance models, but in animal
breeding applications there would be, or at least should be, a u vector, usually referring
to breeding values.

Two Way Fixed Model With Two Covariates

Consider a model

y_ijk = r_i + c_j + gamma_ij + w_1ijk beta_1 + w_2ijk beta_2 + e_ijk.

All elements of the model are fixed except for e, which is assumed to have variance I sigma2_e.
The n_ijk, y_ijk., w1_ijk., and w2_ijk. are as follows

        n_ijk             y_ijk.            w1_ijk.           w2_ijk.
       1   2   3         1   2   3         1   2   3         1   2   3
  1    3   1   2        20   9   4         8   7   2        12  11   4
  2    1   4   3         3  20  24         6  10  11         5  15  14
  3    2   2   1        13   7   8         7   2   4         9   2   7

and the necessary sums of squares and crossproducts are

Σ Σ Σ w1²_ijk = 209,   Σ Σ Σ w1_ijk w2_ijk = 264,   Σ Σ Σ w2²_ijk = 373,
Σ Σ Σ w1_ijk y_ijk = 321,   Σ Σ Σ w2_ijk y_ijk = 433.
Then the matrix of coefficients of the OLS equations is in (21.1). The right hand side vector is (33, 47, 28, 36, 36, 36, 20, 9, 4, 3, 20, 24, 13, 7, 8, 321, 433).

[The 17 x 17 OLS coefficient matrix (21.1), with unknowns ordered r_1, r_2, r_3, c_1, c_2, c_3, gamma_11, . . . , gamma_33, beta_1, beta_2, is not reproduced in full. Its lower right 2 x 2 block is (209, 264; 264, 373).]     (1)

A g-inverse of the coefficient matrix can be obtained by taking a regular inverse with the first 6 rows and columns set to 0. The lower 11 x 11 submatrix of the g-inverse is in (21.2).

[The lower 11 x 11 submatrix of the g-inverse, (21.2), carrying a factor 10^-4, is not reproduced here.]     (2)

This gives a solution vector (0, 0, 0, 0, 0, 0, 7.9873, 6.4294, 5.7748, 2.8341, 8.3174, 6.8717, 7.6451, 7.2061, 5.3826, .6813, −.7843). One can test an hypothesis concerning interactions by subtracting from the reduction under the full model the reduction when gamma is dropped from the model. This tests that all gamma_ij − gamma-bar_i. − gamma-bar_.j + gamma-bar_.. are 0. The reduction under the full model is 652.441. A solution with gamma dropped is

(6.5808, 7.2026, 6.5141, 1.4134, 1.4386, 0, .1393, .5915).

This gives a reduction = 629.353. Then the numerator SS with 4 d.f. is 652.441 − 629.353. The usual test of hypothesis concerning rows is that all r_i + c-bar. + gamma-bar_i. are equal. This is comparable to the test effected by weighted squares of means when there are no covariates. We could define the test as all r_i + c-bar. + gamma-bar_i. + beta_1 w_10 + beta_2 w_20 are equal, where w_10, w_20 can have any values. This is not valid, as shown in Section 16.6, when the regressions are not homogeneous. To find the numerator SS with 2 d.f. for rows take the matrix
K′ = ( 1  1  1  -1  -1  -1   0   0   0 )
     ( 1  1  1   0   0   0  -1  -1  -1 ),

so that K′gamma-hat = (2.1683, −.0424)′, where gamma-hat is the solution under the full model with r°, c° set to 0. Next compute K′ [first 9 rows and columns of (21.2)] K as

( 4.5929  2.2362 )
( 2.2362  4.3730 ).

Then

numerator SS = (2.1683  -.0424) ( 4.5929  2.2362 )^-1 ( 2.1683 )
                                ( 2.2362  4.3730 )    ( -.0424 ) = 1.3908.

If we wish to test beta_1, compute as the numerator SS, with 1 d.f., .6813 (.0767)^-1 (.6813), where beta-hat_1 = .6813 and Var(beta-hat_1) = .0767 sigma2_e.

Two Way Fixed Model With Missing Subclasses

We found in Section 17.3 that the two way fixed model with interaction and with one or more missing subclasses precludes obtaining the usual estimates and tests of main effects and interactions. This is true also, of course, in the covariance model with missing subclasses for fixed by fixed classifications. We illustrate with the same example as before except that the (3,3) subclass is missing. The OLS equations are in (21.3). The right hand side vector is (33, 47, 20, 36, 36, 28, 20, 9, 4, 3, 20, 24, 13, 7, 0, 307, 406)′. Note that the equation for gamma_33 is included even though the subclass is missing.

[The OLS coefficient matrix (21.3) is the same as (21.1) except that the entries arising from the missing (3,3) subclass are altered accordingly; it is not reproduced in full. Its lower right 2 x 2 block is (199, 249; 249, 348).]     (3)

We use these equations to estimate a pseudo-variance, sigma2_gamma, to use in biased estimation with priors on gamma. We use Method 3. Reductions and expectations are

y′y = 638,   E(y′y) = 17 sigma2_e + 17 sigma2_gamma + q.
Red (full) = 622.111,   E( ) = 10 sigma2_e + 17 sigma2_gamma + q.
Red (r, c, beta) = 599.534,   E( ) = 7 sigma2_e + 12.6121 sigma2_gamma + q.
q = a quadratic in r, c, beta.

Solving we get sigma-hat2_e = 2.26985 and sigma-hat2_gamma = 3.59328, or a ratio of .632. Then we add .632 to each of the diagonal coefficients corresponding to gamma equations in (21.3). A resulting solution is

(6.6338, 6.1454, 7.3150, .3217, .6457, 0, 1.3247, .7287, .5960, 1.7830, 1.1870, .5960, .4583, .4583, 0, .6179, .7242).
The resulting biased estimates of r_i + c_j + gamma_ij given w_1 = w_2 = 0 are

( 7.6368  6.5509  6.0378 )
( 4.0407  7.9781  6.7414 )     (4)
( 7.4516  7.5024  7.3150 )

The matrix of estimated mean squared errors, obtained by pre- and post-multiplying a g-inverse of the coefficient matrix by the matrix L that picks out the nine functions r_i + c_j + gamma_ij, is in (21.5). [The 9 x 9 matrix (21.5), carrying a factor 1/10,000, is not reproduced here.]     (5)

To test that all r_i + c-bar. + gamma-bar_i. are equal, use the matrix

( 1  1  1  -1  -1  -1   0   0   0 )
( 1  1  1   0   0   0  -1  -1  -1 )

with (21.4) and (21.5). Then the numerator SS is

(1.4652  2.0434) ( 4.4081  1.6404 )^-1 ( 1.4652 )
                 ( 1.6404  9.2400 )    ( 2.0434 ) = 1.2636.

The test is approximate because the MSE depends upon sigma-hat2_gamma/sigma-hat2_e. Further, the numerator is not distributed as chi-square.

Covariates All Equal At The Same Level Of A Factor

In some applications every w_ij = w_i in a one-way covariate model,

y_ij = mu + t_i + beta w_ij + e_ij,

with all w_ij = w_i. For example, t_i might represent an animal on which there are several observations, y_ij, but the covariate is measured only once. This idea can be extended to multiple classifications. When the factor associated with the constant covariate is fixed, estimability problems exist, Henderson and Henderson (1979). In the one way case t_i − t_j is not estimable and neither is beta.

We illustrate with a one-way case in which n_i = (3,2,4), w_i = (2,4,5), ȳ_i = (6,5,10). The OLS equations are

(  9   3   2   4   34 ) ( mu  )   (  68 )
(  3   3   0   0    6 ) ( t_1 )   (  18 )
(  2   0   2   0    8 ) ( t_2 ) = (  10 )     (6)
(  4   0   0   4   20 ) ( t_3 )   (  40 )
( 34   6   8  20  144 ) ( beta)   ( 276 )

Note that equations 2, 3, 4 sum to equation 1, and also (2 4 5) times these equations gives the last equation. Accordingly the coefficient matrix has rank only 3, the same as if there were no covariate. A solution is (0, 6, 5, 10, 0).
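The two dependencies just noted can be confirmed directly. The sketch below (Python/NumPy) checks that the coefficient matrix of (21.6) has rank 3 and exhibits both dependencies.

    import numpy as np

    # OLS coefficient matrix (21.6): equations 2-4 sum to equation 1, and
    # (2, 4, 5) times equations 2-4 gives the last equation, so the rank is 3.
    M = np.array([[ 9,  3,  2,  4,  34],
                  [ 3,  3,  0,  0,   6],
                  [ 2,  0,  2,  0,   8],
                  [ 4,  0,  0,  4,  20],
                  [34,  6,  8, 20, 144]], dtype=float)
    print(np.linalg.matrix_rank(M))                          # 3
    print(np.allclose(M[0], M[1] + M[2] + M[3]))             # True
    print(np.allclose(M[4], 2*M[1] + 4*M[2] + 5*M[3]))       # True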
If t is random, there is no problem of estimability, for then we need only to look at the rank of

(  9   34 )
( 34  144 ),

and that is 2. Consequently mu and beta are both estimable, and of course t is predictable. Let us estimate sigma2_t and sigma2_e by Method 3 under the assumption Var(t) = I sigma2_t, Var(e) = I sigma2_e. For this we need y′y, the reduction under the full model, and Red (mu, beta).

y′y = 601,   E(y′y) = 9 sigma2_e + 9 sigma2_t + q.
Red (full) = 558,   E( ) = 3 sigma2_e + 9 sigma2_t + q.
Red (mu, beta) = 537.257,   E( ) = 2 sigma2_e + 6.6 sigma2_t + q.

q is a quadratic in mu, beta. This gives estimates sigma-hat2_e = 7.167, sigma-hat2_t = 5.657, or a ratio of 1.27.

Let us use 1 as a prior value of sigma2_e/sigma2_t and estimate sigma2_t by MIVQUE given that sigma2_e = 7.167. We solve for t having added 1 to the diagonal coefficients of equations 2, 3, 4 of (21.6). This gives an inverse,

[the 5 x 5 inverse (21.7), not reproduced here].     (7)

The solution is (3.02521, .55462, -1.66387, 1.10924, 1.11765). From this t0t = 4.30648.
To find its expectation we compute

.01483 .04449
.02966
0
.13347 .08898
tr(Ct [matrix (21.6)] Ct ) = tr

= .20761,
.05932

which is the coefficient of e2 in E(t0t). Ct is the submatrix composed of rows 2-4 of


(21.7).

.03559 .10677
.07118
0
0

0
.32032 .21354
tr(Ct W Zt Zt WCt ) = tr
= .49827,
.14236
the coefficient of t2 in E(t0t). W0 Zt is the submatrix composed of cols. 2-4 of (21.6).
This gives
t2 = 5.657 or
e2 /
t2 = 1.27. If we do another MIVQUE estimation of t2 , given
e2 = 7.167 using the ratio, 1.27, the same estimate of t2 is obtained. Accordingly we
have REML of t2 , given e2 . Notice also that this is the Method 3 estimate.
If t were actually fixed, but we use a pseudo-variance in the mixed model equations
we obtain biased estimators. Using
e2 /
t2 = 1.27,

+ ti = (3.53, 1.48, 4.04).

+ ti + wi . = (5.78, 5.98, 9.67).


Contrast this last with the corresponding OLS estimates of (6,5,10).

Random Regressions

It is reasonable to assume that regression coefficients are random in some models. For
example, suppose we have a model,
yij = + ci + wij i + eij ,
where yij is a yield observation on the j th day for the ith cow, wij is the day, and i is a
regression coefficient, linear slope of yield on time. Linearity is a reasonable assumption
for a relatively short period following peak production. Further, it is obvious that i is
different from cow to cow, and if cows are random, i is also random. Consequently we
should make use of this assumption. The following example illustrates the method. We
have 4 random cows with 3,5,6,4 observations respectively. The OLS equations are in
(21.8).
18 3 5 6

3 0 0

5 0

4 10
0 10
0 0
0 0
4 0
38

30 19
0 0
30 0
0 19
0 0
0 0
190 0
67

26
0
0
0
26
0
0
0
182

o
co
=
o

90
14
18
26
32
51
117
90
216

(8)

10 = w1. ,
30 = w2. , etc.
X
38 =
w2 , etc.
j ij
51 =

X
j

wij yij , etc.

First let us estimate e2 , c2 , 2 by Method 3. The necessary reductions and their expectations are

y0 y

Red (full)

Red (, t)
Red (, )

18
8
4
5

18
18
18
16.9031

477
477
442.5
477

e2
c2
2

1
1
1
1

18 2 .

The reductions are (538, 524.4485, 498.8, 519.6894). This gives estimates
e2 = 1.3552,
2
2
2
2
2
2
= 2.311, the
e /
c = 2.143 and
e /
= .5863. Using the resulting ratios,

c = .6324,
mixed model solution is
(2.02339, .11180, .36513, .09307,
.34639, .73548, .34970, .76934, .83764).
Covariance models are discussed also in Chapter 16.

Chapter 22
Animal Model, Single Records
C. R. Henderson
1984 - Guelph

We shall describe a number of different genetic models and present methods for
BLUE, BLUP, and estimation of variance and covariance components. The simplest
situation is one in which we have only one trait of concern, we assume an additive genetic
model, and no animal has more than a single record on this trait. The scalar model, that
is, the model for an individual record, is
y_i = x′_i beta + z′_i u + a_i + e_i.

beta represents fixed effects, with x_i relating the record on the ith animal to this vector.
u represents random effects other than breeding values, and z_i relates this vector to y_i.
a_i is the additive genetic value of the ith animal.
e_i is a random error associated with the individual record.
The vector representation of the entire set of records is

y = X beta + Zu + Za a + e.     (1)

If a represents only those animals with records, Za = I. Otherwise it is an identity matrix with rows deleted that correspond to animals without records.
Var(u) = G.
Var(a) = A sigma2_a.
Var(e) = R, usually I sigma2_e.
Cov(u, a′) = 0,   Cov(u, e′) = 0,   Cov(a, e′) = 0.

If Za ≠ I, the mixed model equations are

( X′R^-1 X    X′R^-1 Z           X′R^-1 Za                  ) ( beta° )   ( X′R^-1 y  )
( Z′R^-1 X    Z′R^-1 Z + G^-1    Z′R^-1 Za                  ) ( u-hat ) = ( Z′R^-1 y  )     (2)
( Za′R^-1 X   Za′R^-1 Z          Za′R^-1 Za + A^-1/sigma2_a ) ( a-hat )   ( Za′R^-1 y )

If Za = I, (22.2) simplifies to

( X′R^-1 X   X′R^-1 Z          X′R^-1                  ) ( beta° )   ( X′R^-1 y )
( Z′R^-1 X   Z′R^-1 Z + G^-1   Z′R^-1                  ) ( u-hat ) = ( Z′R^-1 y )     (3)
( R^-1 X     R^-1 Z            R^-1 + A^-1/sigma2_a    ) ( a-hat )   ( R^-1 y   )

If R = I sigma2_e, (22.3) simplifies further to

( X′X   X′Z                  X′                          ) ( beta° )   ( X′y )
( Z′X   Z′Z + G^-1 sigma2_e  Z′                          ) ( u-hat ) = ( Z′y )     (4)
( X     Z                    I + A^-1 sigma2_e/sigma2_a  ) ( a-hat )   (  y  )

If the number of animals is large, one should, of course, use Henderson's method (1976) for computing A^-1. Because this method requires using a base population of non-inbred, unrelated animals, some of these probably do not have records. Also we may wish to evaluate some progeny that have not yet made a record. Both of these circumstances will result in Za ≠ I, but a-hat will contain predicted breeding values of these animals without records.
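For readers who want the mechanics of Henderson's (1976) method, the following sketch (Python/NumPy) builds A^-1 directly from a pedigree under the assumption of no inbreeding; the pedigree and the function name a_inverse are hypothetical, and the rules coded are only the non-inbred special case.

    import numpy as np

    # Henderson's rules for A-inverse, no inbreeding: for each animal with d = 2
    # (both parents known), 4/3 (one known) or 1 (none known), add d to its own
    # diagonal, -d/2 to animal-parent cells, and d/4 to parent-parent cells.
    def a_inverse(pedigree, n):
        ainv = np.zeros((n, n))
        for animal, sire, dam in pedigree:
            known = [p for p in (sire, dam) if p is not None]
            d = {0: 1.0, 1: 4.0 / 3.0, 2: 2.0}[len(known)]
            ainv[animal, animal] += d
            for p in known:
                ainv[animal, p] += -d / 2.0
                ainv[p, animal] += -d / 2.0
            for p in known:
                for q in known:
                    ainv[p, q] += d / 4.0
        return ainv

    # animals 0, 1 are unrelated base parents; 2 and 3 are full sibs out of them
    ped = [(0, None, None), (1, None, None), (2, 0, 1), (3, 0, 1)]
    print(a_inverse(ped, 4))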

Example With Dam-Daughter Pairs

We illustrate the model above with 5 pairs of dams and daughters, the dams' records being made in period 1 and the daughters' in period 2. Ordering the records within periods, and with record 1 being made by the dam of the individual making record 6, etc.,

X′ = ( 1 1 1 1 1 0 0 0 0 0 )
     ( 0 0 0 0 0 1 1 1 1 1 ),

Za = I,   y′ = [5, 4, 3, 2, 6, 6, 7, 3, 5, 4],

A = (  I_5   .5 I_5 )
    ( .5 I_5   I_5  ),      R = I_10 sigma2_e.

The sires and dams are all unrelated. We write the mixed model equations with sigma2_e/sigma2_a assumed to be 5. These equations are
assumed to be 5. These equations are

5
0
1
1
1
1
1
0
0
0
0
0

0 1
5 0
0
0
0
0
0
1
1
1
1
1

1
0
23
3

10
3

1 1
0 0
I5

I5

1
0
10
3

23
3

0 0 0 0 0

1 1 1 1 1

I5

I5

p1
p2
=
a

20
25
5
4
3
2
6
6
7
3
5
4

(5)

The inverse of the coefficient matrix, (22.6), is not reproduced in full. Its upper left 2 x 2 block has .24 on the diagonals, and its animal part consists of 5 x 5 blocks P and Q: P has .16867 in diagonals and .00783 in all off-diagonals, and Q has .07594 in diagonals and .00601 in off-diagonals. The solution is (4, 5, .23077, .13986, −.30070, −.32168, .25175, .23077, .32168, −.39161, −.13986, −.20298).
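The equations just solved can be reproduced from (22.4) with a few lines of code. The sketch below (Python/NumPy) assembles the mixed model equations for this dam-daughter example with sigma2_e/sigma2_a = 5 and solves them; it should return the period solutions (4, 5) and the ten predicted breeding values quoted above.

    import numpy as np

    # Mixed model equations (22.4), with no u vector, for the dam-daughter example.
    y = np.array([5, 4, 3, 2, 6, 6, 7, 3, 5, 4], dtype=float)
    X = np.zeros((10, 2)); X[:5, 0] = 1.0; X[5:, 1] = 1.0        # period effects
    A = np.block([[np.eye(5), 0.5 * np.eye(5)],
                  [0.5 * np.eye(5), np.eye(5)]])                 # dam-daughter A
    ratio = 5.0                                                  # sigma2_e / sigma2_a
    C = np.block([[X.T @ X, X.T],
                  [X, np.eye(10) + ratio * np.linalg.inv(A)]])
    rhs = np.concatenate([X.T @ y, y])
    sol = np.linalg.solve(C, rhs)
    print(sol[:2], sol[2:].round(5))    # period solutions, then breeding values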
Let us estimate sigma2_e and sigma2_a by MIVQUE using the prior sigma2_e/sigma2_a = 5 as we did in computing BLUP. The quadratics needed are e-hat′e-hat and a-hat′A^-1 a-hat.

e-hat = y − (X Za)(beta°′ a-hat′)′
      = (.76923, .13986, .69930, 1.67832, 1.74825, .76923, 1.67832, 1.60839, .13986, .97902)′.

e-hat′e-hat = 13.94689.

Var(e-hat) = (I − WCW′)(I − WCW′) sigma2_e + (I − WCW′)A(I − WCW′) sigma2_a,
W = (X Za),   C = matrix of (22.6).
E(e-hat′e-hat) = tr(Var(e-hat)) = 5.67265 sigma2_e + 5.20319 sigma2_a.
a-hat = Ca W′y, where Ca = last 10 rows of C.
Var(a-hat) = Ca W′W Ca′ sigma2_e + Ca W′A W Ca′ sigma2_a.
E(a-hat′A^-1 a-hat) = tr(A^-1 Var(a-hat)) = .20813 sigma2_e + .24608 sigma2_a.
a-hat′A^-1 a-hat = .53929.

Using these quadratics and solving for sigma-hat2_e, sigma-hat2_a we obtain sigma-hat2_e = 2.00, sigma-hat2_a = .50. The same estimates are obtained for any prior used for sigma2_e/sigma2_a. This is a consequence of the fact that we have a balanced design. Therefore the estimates are truly BQUE and also are REML. Further, the traditional method, daughter-dam regression, gives the same estimates. These are

sigma-hat2_a = 2 times the regression of daughter on dam,
sigma-hat2_e = within period mean square − sigma-hat2_a.

For unbalanced data MIVQUE is not invariant to the prior used, and daughter-dam regression is neither MIVQUE nor REML. We illustrate by assuming that y_10 was not observed. With sigma2_e/sigma2_a assumed equal to 2 we obtain

e-hat′e-hat = 11.99524 with expectation 3.37891 sigma2_e + 2.90355 sigma2_a,
a-hat′A^-1 a-hat = 2.79712 with expectation .758125 sigma2_e + .791316 sigma2_a.

This gives sigma-hat2_a = .75619, sigma-hat2_e = 2.90022. When sigma2_e/sigma2_a is assumed equal to 5, the results are

e-hat′e-hat = 16.83398 with expectation 4.9865 sigma2_e + 4.6311 sigma2_a,
a-hat′A^-1 a-hat = .66973 with expectation .191075 sigma2_e + .214215 sigma2_a.

Then sigma-hat2_a = .67132, sigma-hat2_e = 2.7524.

Chapter 23
Sire Model, Single Records
C. R. Henderson
1984 - Guelph

A simple sire model is one in which sires, possibly related, are mated to a random
sample of unrelated dams, no dam has more than one progeny with a record, and each
progeny produces one record. A scalar model for this is
y_ij = x′_ij beta + s_i + z′_ij u + e_ij.     (1)

beta represents fixed effects, with x_ij relating the jth progeny of the ith sire to these effects.
s_i represents the sire effect on the progeny record.
u represents other random factors, with z_ij relating these to the ijth progeny record.
e_ij is a random error.

The vector representation is

y = X beta + Zs s + Zu + e.     (2)

Var(s) = A sigma2_s, where A is the numerator relationship matrix of the sires, and sigma2_s is the sire variance in the base population. If the sires comprise a random sample from this population, sigma2_s = 1/4 of the additive genetic variance. Some columns of Zs will be null if s contains sires with no progeny, as will usually be the case if the simple method for computation of A^-1, requiring base population animals, is used.

Var(u) = G,   Cov(s, u′) = 0.
Var(e) = R, usually = I sigma2_e.
Cov(s, e′) = 0,   Cov(u, e′) = 0.

If sires and dams are truly random,

I sigma2_e = .75 I (additive genetic variance) + I (environmental variance).
With this model the mixed model equations are

( X′R^-1 X    X′R^-1 Zs                   X′R^-1 Z          ) ( beta° )   ( X′R^-1 y  )
( Zs′R^-1 X   Zs′R^-1 Zs + A^-1/sigma2_s  Zs′R^-1 Z         ) ( s-hat ) = ( Zs′R^-1 y )     (3)
( Z′R^-1 X    Z′R^-1 Zs                   Z′R^-1 Z + G^-1   ) ( u-hat )   ( Z′R^-1 y  )

If R = I sigma2_e, (23.3) simplifies to (23.4)

( X′X    X′Zs                            X′Z                 ) ( beta° )   ( X′y  )
( Zs′X   Zs′Zs + A^-1 sigma2_e/sigma2_s  Zs′Z                ) ( s-hat ) = ( Zs′y )     (4)
( Z′X    Z′Zs                            Z′Z + sigma2_e G^-1 ) ( u-hat )   ( Z′y  )

We illustrate this model with the following data.

             n_ij                          y_ij.
            Herds                          Herds
  Sires   1   2   3   4              1    2    3    4
    1     3   5   0   0             25   34    -    -
    2     0   8   4   0              -   74   31    -
    3     4   2   6   8             23   11   43   73

The model assumed is

y_ijk = s_i + h_j + e_ijk,
Var(s) = A sigma2_e/12,   Var(e) = I sigma2_e,

A = ( 1.0   .5    .5  )
    (  .5  1.0   .25  ),
    (  .5   .25  1.0  )

h is fixed.
The ordinary LS equations are

(  8   0   0   3   5   0   0 ) ( s )   (  59 )
(  0  12   0   0   8   4   0 )         ( 105 )
(  0   0  20   4   2   6   8 )         ( 150 )
(  3   0   4   7   0   0   0 ) ( h ) = (  48 )     (5)
(  5   8   2   0  15   0   0 )         ( 119 )
(  0   4   6   0   0  10   0 )         (  74 )
(  0   0   8   0   0   0   8 )         (  73 )

The mixed model equations are

( 28  -8  -8   3   5   0   0 ) ( s )   (  59 )
( -8  28   0   0   8   4   0 )         ( 105 )
( -8   0  36   4   2   6   8 )         ( 150 )
(  3   0   4   7   0   0   0 ) ( h ) = (  48 )     (6)
(  5   8   2   0  15   0   0 )         ( 119 )
(  0   4   6   0   0  10   0 )         (  74 )
(  0   0   8   0   0   0   8 )         (  73 )

[The inverse of the matrix of (23.6), equation (23.7), is not reproduced here.]     (7)

The solution is

s-hat′ = (.036661, .453353, .435022),
h-hat′ = (7.121439, 7.761769, 7.479672, 9.560022).
Let us estimate sigma2_e from the residual mean square using the OLS reduction, and sigma2_s by MIVQUE type computations. A solution to the OLS equations is

[10.14097, 11.51238, 9.12500, 2.70328, 2.80359, 2.67995, 0].

This gives a reduction in SS of 2514.166, and y′y = 2922. Then sigma-hat2_e = (2922 − 2514.166)/(40 − 6) = 11.995. MIVQUE requires computation of s-hat′A^-1 s-hat and equating it to its expectation.

A^-1 = (1/3) (  5  -2  -2 )
             ( -2   4   0 ),
             ( -2   0   4 )

s-hat′A^-1 s-hat = .529500.

Var(RHS of mixed model equations) = [Matrix (23.5)] sigma2_e + (W′Zs) A (Zs′W) sigma2_s,

where W′Zs is the 7 x 3 matrix with rows (8,0,0), (0,12,0), (0,0,20), (3,0,4), (5,8,2), (0,4,6), (0,0,8). [The resulting 7 x 7 matrix, (23.8), is not reproduced here.]

Var(s-hat) = Cs [matrix (23.8)] Cs′ sigma2_s + Cs [matrix (23.5)] Cs′ sigma2_e,

where Cs = the first 3 rows of (23.7). [The two 3 x 3 matrices making up Var(s-hat), shown as (23.9), are not reproduced here.] Then E(s-hat′A^-1 s-hat) = tr(A^-1 [matrix (23.9)]) = .033184 sigma2_e + .181355 sigma2_s. With these results we solve for sigma-hat2_s, and this is .7249 using the estimated sigma-hat2_e of 11.995. This is an approximate MIVQUE solution because sigma-hat2_e was computed from the residual of the ordinary least squares reduction rather than by MIVQUE.

Chapter 24
Animal Model, Repeated Records
C. R. Henderson
1984 - Guelph

In this chapter we deal with a one trait, repeated records model that has been extensively used in animal breeding, and particularly in lactation studies with dairy cattle.
The assumptions of this model are not entirely realistic, but may be an adequate approximation. The scalar model is
y_ij = x′_ij beta + z′_ij u + c_i + e_ij.     (1)

beta represents fixed effects, and x_ij relates the jth record of the ith animal to elements of beta.
u represents other random effects, and z_ij relates the record to them.
c_i is a cow effect. It represents both genetic merit for production and permanent environmental effects.
e_ij is a random error associated with the individual record.

The vector representation is

y = X beta + Zu + Zc c + e.     (2)

Var(u) = G,
Var(c) = I sigma2_c if cows are unrelated, with sigma2_c = sigma2_a + sigma2_p,
       = A sigma2_a + I sigma2_p if cows are related,

where sigma2_p is the variance of permanent environmental effects; if there are non-additive genetic effects, it also includes their variances, and in that case I sigma2_p is only approximate. Var(e) = I sigma2_e. Cov(u, a′), Cov(u, e′), and Cov(a, e′) are all null. For the related cow model let

Zc c = Zc a + Zc p.     (3)

It is advantageous to use this latter model in setting up the mixed model equations, for then the simple method for computing A^-1 can be used. There appears to be no simple method for computing directly the inverse of Var(c).

( X′X    X′Z                  X′Zc                            X′Zc                         ) ( beta° )   ( X′y  )
( Z′X    Z′Z + sigma2_e G^-1  Z′Zc                            Z′Zc                         ) ( u-hat )   ( Z′y  )
( Zc′X   Zc′Z                 Zc′Zc + A^-1 sigma2_e/sigma2_a  Zc′Zc                        ) ( a-hat ) = ( Zc′y )     (4)
( Zc′X   Zc′Z                 Zc′Zc                           Zc′Zc + I sigma2_e/sigma2_p ) ( p-hat )   ( Zc′y )

These equations are easy to write provided G^-1 is easy to compute, G being diagonal, e.g., as is usually the case. A^-1 can be computed by the easy method. Further, Zc′Zc + I sigma2_e/sigma2_p is diagonal, so p-hat can be absorbed easily. In fact, one would not need to write the p-hat equations. See Henderson (1975b). Also Z′Z + sigma2_e G^-1 is sometimes diagonal and therefore can be absorbed easily. If predictions of breeding values are of primary interest, a-hat is what is wanted. If, in addition, predictions of real producing abilities are wanted, one needs p-hat. Note that by subtracting the 4th equation of (24.4) from the 3rd we obtain

A^-1 (sigma2_e/sigma2_a) a-hat − I (sigma2_e/sigma2_p) p-hat = 0.     (5)

Consequently

p-hat = (sigma2_p/sigma2_a) A^-1 a-hat,     (6)

and predictions of real producing abilities are

[I + (sigma2_p/sigma2_a) A^-1] a-hat.

Note that under the model used in this chapter

Var(y_ij) = Var(y_ik), j ≠ k,

and Cov(y_ij, y_ik) is identical for all pairs of j ≠ k. This is not necessarily a realistic model. If we wish a more general model, probably the most logical and easiest one to analyze is that which treats different lactations as separate traits, the methods for which are described in Chapter 26.
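Equation (24.6) is a cheap post-processing step once a-hat is available. The sketch below (Python/NumPy) applies it using the relationship matrix and variance ratios of the example that follows; the a-hat vector used here is a hypothetical stand-in, not a solution from the text.

    import numpy as np

    # (24.6): p-hat = (sigma2_p / sigma2_a) * A^{-1} * a-hat, and real producing
    # ability = a-hat + p-hat.  A, h2 = .25 and r = .45 follow the example below;
    # the a-hat values are hypothetical.
    A = np.array([[1.0, 0.5, 0.5,   0.5  ],
                  [0.5, 1.0, 0.25,  0.125],
                  [0.5, 0.25, 1.0,  0.5  ],
                  [0.5, 0.125, 0.5, 1.0  ]])
    sigma2_a, sigma2_p = 0.25, 0.20
    a_hat = np.array([0.065, -0.263, 0.280, -0.113])          # hypothetical
    p_hat = (sigma2_p / sigma2_a) * np.linalg.solve(A, a_hat)
    producing_ability = a_hat + p_hat
    print(p_hat.round(4), producing_ability.round(4))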
We illustrate the simple repeatability model with the following example. Four animals produced records as follows in treatments 1, 2, 3. The model is

y_ij = t_i + a_j + p_j + e_ij.

               Animals
  Treatment   1  2  3  4
      1       5  3  -  4
      2       6  5  7  3
      3       8  -  9  2

The relationship matrix of the 4 animals is

( 1   .5    .5    .5   )
(     1     .25   .125 )
(           1     .5   )
(                 1    )

Var(a) = .25 A sigma2_y,   Var(p) = .2 I sigma2_y,   I sigma2_e = .55 I sigma2_y.
These values correspond to h2 = .25 and r = .45, where r denotes repeatability. The OLS
equations are

3 0 0 1

3 0 1

2 1

1
1
0
0
2

0
1
1
0
0
2

1
0
0
0
0
0
1

1
1
1
3
0
0
0
3

1
1
0
0
2
0
0
0
2

0
1
1
0
0
2
0
0
0
2

1
0
0
0
0
0
1
0
0
0
1

t
a
=
p

12
18
17
19
8
16
4
19
8
16
4

(7)

Note that the last 4 equations are identical to equations 4-7. Thus a and p are confounded
in a fixed model. Now we add 2.2 A1 to the 4-7 diagonal block of coefficients and 2.75
I to the 8-11 diagonal block of coefficients. The resulting coefficient matrix is in (24.8).
2.2 = .55/.25, and 2.75 = .55/.2.

3.0

0
3.0

0
0
2.0

1.0
1.0
0
1.0
1.0
1.0
1.0
0
1.0
0
1.0
0
7.2581 1.7032 .9935 1.4194
5.0280 .1892
.5677
5.3118 1.1355
4.4065

1.0
1.0
1.0
3.0
0
0
0
5.75

1.0
1.0
0
0
2.0
0
0
0
4.75

0
1.0
1.0
0
0
2.0
0
0
0
4.75

1.0
0
0
0
0
0
1.0
0
0
0
3.75

(8)

The inverse of (24.8) (times 1000) is


693 325 313 280 231 217 247 85 117 43 119

709 384 288 246 266 195 96 114 118 34

943
306
205
303
215
126
60
152
26

414
227
236
225 64
24
26
15

390
153
107
0
64
31
33

410
211
14
37 53
2

406
3
48
2 42

261
38
41
24

286
21
18

290
12
310

(9)

The solution is
t0 = (4.123 5.952 8.133),
0 = (.065, .263, .280, .113),
a
0 = (.104, .326, .285, .063).
p
We next estimate e2 , a2 , p2 , by MIVQUE with the priors that were used in the above
0
mixed model solution. The Zc W submatrix for both a and p is

1
1
0
1

1
1
1
0

1
0
1
0

3
0
0
0

0
2
0
0

0
0
2
0

0
0
0
1

3
0
0
0

0
2
0
0

0
0
2
0

0
0
0
1

(10)

The variance of the right hand sides of the mixed model equations contains
0
W0 Zc AZc W a2 , where W = (X Z Zc Zc ). The matrix of coefficients of a2 is in (24.11).
0
V ar(r) also contains W0 Zc Zc W p2 and this matrix is in (24.12). The coefficients of e2
are in (24.7).

5.25 4.88 3.25 6.0

5.5 3.75 6.0

3.0 4.5

9.0

3.25
3.5
1.5
3.0
4.0

2.5
3.5
3.0
3.0
1.0
4.0

1.65
1.13
1.0
1.5
.25
1.0
1.0

6.0
6.0
4.5
9.0
3.0
3.0
1.5
9.0

3.25
3.5
1.5
3.0
4.0
1.0
.25
3.0
4.0

2.5
3.5
3.0
3.0
1.0
4.0
1.0
3.0
1.0
4.0

1.63
1.13
1.0
1.5
.25
1.0
1.0
1.5
.25
1.0
1.0

(11)

3 2 1 3

3 2 3

2 3

2
2
0
0
4

0
2
2
0
0
4

1
0
0
0
0
0
1

3
3
3
9
0
0
0
9

2
2
0
0
4
0
0
0
4

0
2
2
0
0
4
0
0
0
4

1
0
0
0
0
0
1
0
0
0
1

(12)

Now V ar(
a) contains Ca (V ar(r))C0a a2 , where Ca is the matrix formed by rows 4-9 of the
0
matrix in (24.9). Then Ca (V ar(r))Ca is
.0168 .0012 .0061
.0012

.0423 .0266 .0323

.0236
.0160
.0274

.0421 .0019 .0099


.0050

.0460 .0298 .0342


+

.0331
.0136
.0310

a2

(13)

p2

(14)

.0172 .0001 .0022 .0004

.0289 .0161 .0234


2
+

.0219
.0042 e
.0252

(15)

0 A1 a
0 = .2067. The expectation of this is
We need a
trA1 [matrix (24.13) + matrix (24.14) + matrix (24.15)]
= .1336 e2 + .1423 a2 + .2216 p2 .
To find V ar(
p) we use Cp , the last 6 rows of (24.9).
.0429 .0135 .0223 .0071

.0455 .0154 .0166

V ar(
p) =

.0337
.0040
.0197

.1078 .0423 .0466 .0189

.0625 .0106 .0096


+

.0586 .0014
.0298

a2

(16)

p2

(17)

.0441 .0167 .0139 .0135

.0342 .0101 .0074

.0374 .0133
.0341

e2 .

(18)

0p
= .2024 with expectation
We need p
tr[matrix (24.16) + matrix (24.17) + matrix (24.18)]
= .1498 e2 + .1419 a2 + .2588 p2 .
0 e
.
We need e
= [I WCW0 ]y,
e
where C = matrix (24.9), and I WCW0 is

.4911 .2690 .2221 .1217


.1183
.0034 .0626
.0626

.4548 .1858
.1113 .1649
.0536
.0289 .0289

.4079
.0104
.0466
.0570
.0337
.0337

.5122 .2548 .2574 .1152


.1152

.4620
.2073
.0238
.0238

.4647
.1390 .1390

.3729 .3729

.3729
Then

e
e

e
V ar(
e)
V ar(y)
0

=
=
=
=

[.7078, .5341, .1736, .1205, .3624, .4829, .3017, .3017].


1.3774.
(I WCW0 ) V ar(y) (I WCW0 ),
0
0
Zc AZc a2 + Zc Zc p2 + I e2 .

1 .5 .5
1. .5

1. .125 .5 1.

1.
.5 .125

1. .5
=

1.

.5
.25
.5
.5
.25
1.

1 0 0 1

1 0 0

1 0

0
1
0
0
1

0
0
0
0
0
1

1
0
0
1
0
0
1

0
0
0
0
0
1
0
1
6

1.
.5
.5
1.
.5
.5
1.

.5
.25
.5
.5
.25
1.
.5
1.

a2

p2 + Ie2 .

(19)

Then the diagonals of V ar(


e) are
(.0651, .1047, .1491, .0493, .0918, .0916, .0475, .0475) a2
+ (.1705, .1358, .2257, .1167, .1498, .1462, .0940, .0940) p2
+ [diagonals of (24.19)] e2 .
) is the sum of these diagonals
Then E(
e0 e
= .6465 a2 + 1.1327 p2 + 3.5385 e2 .

Chapter 25
Sire Model, Repeated Records
C. R. Henderson
1984 - Guelph

This chapter is a combination of those of Chapters 23 and 24. That is, we are concerned with progeny testing of sires, but some progeny have more than one record. The scalar model is

y_ijk = x′_ijk beta + z′_ijk u + s_i + p_ij + e_ijk.

u represents random factors other than s and p. It is assumed that all dams are unrelated and all progeny are non-inbred. Under an additive genetic model the covariance between any record on one progeny and any record on another progeny of the same sire is sigma2_s = (1/4) h² sigma2_y if sires are a random sample from the population. The covariance between any pair of records on the same progeny is sigma2_s + sigma2_p = r sigma2_y. If sires are unselected, sigma2_p = (r − (1/4)h²) sigma2_y, sigma2_e = (1 − r) sigma2_y, sigma2_s = (1/4) h² sigma2_y.

In vector notation the model is

y = X beta + Zu + Zs s + Zp p + e.

Var(s) = A sigma2_s,   Var(p) = I sigma2_p,   Var(e) = I sigma2_e.
With field data one might eliminate progeny that do not have a first record in order to
reduce bias due to culling, which is usually more intense on first than on later records.
Further, if a cow changes herds, the records only in the first herd might be used. In this
case useful computing strategies can be employed. The data can be entered by herds, and
0
p easily absorbed because Zp Zp + Ie2 /p2 is diagonal. Once this has been done, fixed
effects pertinent to that particular herd can be absorbed. These methods are described in
detail in Ufford et al. (1979). They are illustrated also in a simple example which follows.
We have a model in which the fixed effects are herd-years. The observations are
displayed in the following table.

Sires Progeny 11
1
1
5
2
5
3
4
5
6
7
2
8
7
9
10
11
12
13
14

Herd - years
12 13 21 22 23
6 4
8
9 4
5 6 7
4 5
4 3
2
6
5 4
9
4
3 7 6
5 6
5

24

3
8

8
4

We assume e2 /s2 = 8.8, e2 /p2 = 1.41935. These correspond to unselected sires,


h2 = .25, r = .45. Further, we assume that A for the 2 sires is
1 .25
.25 1

Ordering the solution vector hy, s, p the matrix of coefficients of OLS equations is in
(25.1), and the right hand side vector is (17, 43, 16, 12, 27, 29, 23, 88, 79, 15, 13, 13, 21,
9, 7, 10, 13, 9, 9, 4, 16, 19, 9)0 .
X0 X = diag (3, 6, 4, 3, 5, 6, 4)
0
Zs Zs = diag (17, 14)
0
Zp Zp = diag (3, 2, 2, 4, 2, 2, 2, 2, 2, 1, 1, 3, 3, 2)
0

Zs X =
0

Zs Zp =

2 3 2 2 3 3 2
1 3 2 1 2 3 2

3 2 2 4 2 2 2 0 0 0 0 0 0 0
0 0 0 0 0 0 0 2 2 1 1 3 3 2

Zp X =

1
1
0
0
0
0
0
1
0
0
0
0
0
0

1
1
1
0
0
0
0
1
1
1
0
0
0
0

1
0
1
0
0
0
0
0
1
0
1
0
0
0

0
0
0
1
1
0
0
0
0
0
0
1
0
0

0
0
0
1
1
1
0
0
0
0
0
1
1
0

0
0
0
1
0
1
1
0
0
0
0
1
1
1

0
0
0
1
0
0
1
0
0
0
0
0
1
1

(1)

Modifying these by adding 8.8 A1 and 1.41935 I to appropriate submatrices of the


coefficient matrix, the BLUP solution is
c = [5.83397, 7.14937, 4.18706, 3.76589, 5.29825, 4.83644, 5.64274]0 ,
hy
s = [.06397, .06397]0 ,
= [.44769, .04229, .52394, .31601, .01866, .87933, .10272,
p
.03255, .72072, .73848, .10376, .43162, .68577, .47001]0 .

If one absorbs p in the mixed model equations, we obtain


2.189 .811 .226

4.191 .811

2.775

0
0
0
0
0
0
0
0
0
0
0
0
2.297 .703 .411 .184
3.778 .930 .411
4.486 .996
3.004

c
hy
s

6.002
21.848
4.518
1.872
10.526
9.602
9.269
31.902
31.735
3

.736
.415
1.151
1.417
.736
1.002
.677
.321
1.092
.642
1.092
1.057
.677
.736
15.549 2.347
14.978

c and
The solution for hy
s are the same as before.
c can be absorbed
If one chooses, and this would be mandatory in large sets of data, hy
c are in block diagonal form. When hy
c is
herd by herd. Note that the coefficients of hy
absorbed, the equations obtained are

12.26353 5.222353
5.22353
12.26353

s1
s2

1.1870
1.1870

The solution is approximately the same for sires as before, the approximation being due
to rounding errors.

Chapter 26
Animal Model, Multiple Traits
C. R. Henderson
1984 - Guelph

No Missing Data

In this chapter we deal with the same model as in Chapter 22 except now there are 2 or
more traits. First we shall discuss the simple situation in which every trait is observed
on every animal. There are n animals and t traits. Therefore the record vector has nt
elements, which we denote by
y0 = [y10 y20 . . . yt0 ].
y10 is the vector of n records on trait 1, etc. Let the model be

y1
y2
..
.

X1
0 ... 0
0 X2 . . . 0
..
..
..
.
.
.
0
0 . . . Xt

yt

I
0
..
.

0 ...
I ...
..
.

0
0
..
.

0 0 ... I

1
2
..
.

t
a1
a2
..
.

e1
e2
..
.

(1)

et

at

Accordingly the model for records on the first trait is


y1 = X1 1 + a1 + e1 , etc.

(2)

Every Xi has n rows and pi columns, the latter corresponding to i with pi elements.
Every I has order n x n, and every ei has n elements.

V ar

a1
a2
..
.
at

Ag11 Ag12 . . . Ag1t


Ag12 Ag22 . . . Ag2t
..
..
..
.
.
.
Ag1t Ag2t . . . Agtt

= G.

(3)

gij represents the elements of the additive genetic variance-covariance matrix in a noninbred population.

V ar

e1
e2
..
.

et

Ir11 Ir12 . . . Ir1t


Ir12 Ir22 . . . Ir2t
..
..
..
.
.
.
Ir1t Ir2t . . . Irtt

= R.

(4)

rij represents the elements of the environmental variance-covariance matrix. Then


A1 g 11 . . . A1 g 1t

..
..
.
=
.
.

A1 g 1t . . . A1 g tt

G1

(5)

g ij are the elements of the inverse of the additive genetic variance covariance matrix.
Ir11 . . . Ir1t
.
..
=
.
.
..
Ir1t . . . Irtt

R1

(6)

rij are the elements of the inverse of the environmental variance-covariance matrix. Now
the GLS equations regarding a fixed are
X0 1 X1 r11 . . .
.
.
.

X0 1 Xt r1t
..
.

X0 t X1 r1t
X1 r11
..
.

. . . X0 t Xt rtt
. . . Xt r1t
..
.

X0 t r1t
Ir11
..
.

X1 r1t

. . . Xt rtt

Ir1t

X0 1 y1 r11 + . . . +
..
.

X0 1 r1t
o1
.
..
.
.
.
o

. . . X0 t rtt
t
1t

1
. . . Ir

..
..
.
.

X0 1 r11 . . .
..
.

X0 1 yt r1t

..

X0 t yt rtt

yt r1t

..

X0 t y1 r1t
y1 r11
..
.

+ ... +
+ ... +

y1 r1t

+ . . . + yt rtt

. . . Irtt

t
a

(7)

The mixed model equations are formed by adding (26.5) to the lower t2 blocks of (26.7).
If we wish to estimate the gij and rij by MIVQUE we take prior values of gij and rij
and e
needed for
for the mixed model equations and solve. We find that quadratics in a
MIVQUE are
0i A1 a
j for i = l, . . . , t; j = i, . . . , t.
a
(8)
0i e
j for i = l, . . . , t; j = i, . . . , t.
e
2

(9)

To obtain the expectations of (26.8) we first compute the variance-covariance matrix of


the right hand sides of (26.7). This will consist of t(t 1)/2 matrices each of the same
order as the matrix of (26.7) multiplied by an element of gij . It will also consist of the
same number of matrices with the same order multiplied by an element of rij . The matrix
for gkk is
X0 1 AX1 r1k r1k . . .
.
.
.

X0 1 AXt r1k rtk


..
.

X0 1 Ar1k r1k . . .
..
.

X0 1 Ar1k rtk

..

X0 t AX1 rtk r1k


AX1 r1k r1k
..
.

. . . X0 t AXt rtk rtk


. . . AXt r1k rtk
..
.

X0 t Artk r1k
Ar1k r1k
..
.

. . . X0 t Artk rtk
. . . Ar1k rtk
..
.

AX1 rtk r1k

. . . AXt rtk rtk

Artk r1k

. . . Artk rtk

(10)

The ij th sub-block of the upper left set of t t blocks is X0 i AXj rik rjk . The sub-block of
the upper right set of t t blocks is X0 i Arik rjk . The sub-block of the lower right set of
t t blocks is Arik rjk .
The matrix for gkm is
P T
T0 S

(11)

where
2X0 1 AX1 r1k r1m
...
.
P =
..
X0 t AX1 (r1k rtm + r1m rtk ) . . .

...
2X0 1 Ar1k r1m
.
T =
..
X0 t A(r1k rtm + r1m rtk ) . . .

X0 1 AXt (r1k rtm + r1m rtk )

..
,
.

0
tk tm
2X t AXt r r

X0 1 A(r1k rtm + r1m rtk )

..
,
.

0
tk tm
2X t Ar r

and
2Ar1k r1m
...
.
S =
..
A(r1k rtm + r1m rtk ) . . .

A(r1k rtm + r1m rtk )

..
.
.

2Artk rtm

The ij th sub-block of the upper left set of t t blocks is


X0 i AXj (rik rjm + rim rjk ).

(12)

The ij th sub-block of the upper right set is


X0 i A(rik rjm + rim rjk ).

(13)

The ij th sub-block of the lower right set is


A(rik rjm + rim rjk ).
3

(14)

The matrix for rkk is the same as (26.10) except that I replaces A. Thus the 3 types
of sub-blocks are X0 i Xj rik rjk , X0 i rik rjk , and Irik rjk . The matrix for rkm is the same as
(26.11) except that I replaces A. Thus the 3 types of blocks are X0 i Xj (rik rjm + rim rjk ),
X0 i (rik rjm + rim rjk ), and I(rik rjm + rim rjk ).
Now define the p + 1, . . . , p + n rows of a g-inverse of mixed model coefficient matrix
as C1 , the next n rows as C2 , etc., with the last n rows being Ct . Then
V ar(
ai ) = Ci [V ar(r)]C0i ,

(15)

where V ar(r) = variance of right hand sides expressed as matrices multiplied by the gij
and rij as described above.
0j ) = Ci [V ar(r)]C0j .
Cov(
ai , a

(16)

0i ) = trA1 V ar(
E(
ai A1 a
ai ).

(17)

0j ) = trA1 Cov(
0j ).
E(
ai A1 a
ai , a

(18)

Then

To find the quadratics of (26.9) and their expectations we first compute


1 ,
I WCW0 R

(19)

where W = (X Z) and C = g-inverse of mixed model coefficient matrix. Then


1 )y.
= (I WCW0 R
e

(20)

Let the first n rows of (26.19) be denoted B1 , the next n rows B2 , etc. Also let
Bi (Bi1 Bi2 . . . Bit ).

(21)

1 is symmetric and
Each Bij has dimension n n and is symmetric. Also I WCW0 R
as a consequence Bij = Bji . Use can be made of these facts to reduce computing labor.
Now
i = Bi y (i = 1, . . . , t).
e
V ar(
ei ) = Bi [V ar(y)]B0i .
j ) = Bi [V ar(y)]B0j .
Cov(
ei , e

(22)
(23)
(24)

By virtue of the form of V ar(y),


V ar(
ei ) =

t
X

B2ik rkk +

k=1

t
X

t1
X

t
X

2Bik Bim rkm

k=1 m=k+1
t1
X

Bik ABik gkk +

k=1

t
X

k=1 m=k+1

2Bik ABim gkm .

(25)

j ) =
Cov(
ei , e

t
X

Bik Bjk rkk

k=1

+
+
+

t1
X

t
X

(Bik Bjm Bim Bjk )rkm

k=1 m=k+1
t
X

Bik ABjk gkk

k=1
t1
X

t
X

(Bik ABjm + Bim ABjk )gkm .

(26)

k=1 m=k+1

i ) = trV ar(
E(
e0i e
ei ).
0
j ) = trCov(
0j ).
E(
ei e
ei , e

(27)
(28)

Note that only the diagonals of the matrices of (26.25) and (26.26) are needed.

Missing Data

When data are missing on some traits of some of the animals, the computations are more
difficult. An attempt is made in this section to present algorithms that are efficient for
computing, including strategies for minimizing data storage requirements. Henderson and
Quaas (1976) discuss BLUP techniques for this situation.
The computations for the missing data problem are more easily described and carried
out if we order the records, traits within animals. It also is convenient to include missing
data as a dummy value = 0. Then y has nt elements as follows:
y0 = (y10 y20 . . . yn0 ),
where yi is the vector of records on the t traits for the ith animal. With no missing data

the model for the nt records is

y11
y12
..
.
y1t
y21
y22
..
.
y2t
..
.
yn1
yn2
..
.
ynt

x011 0
0 x012
..
..
.
.
0
0
x021 0
0 x022
..
..
.
.
0
..
.

0
..
.

...
...
...
...
...
...

0
0
..
.

x01t

..

0
x2t
..

..

x0n1 0 . . .
0 x0n2 . . .
..
..
.
.
0
0 . . . x0nt

..
+
.

a11
a12
..
.
a1t
a21
a22
..
.
a2t
..
.
an1
an2
..
.
ant

e11
e12
..
.
e1t
e21
e22
..
.
e2t
..
.
en1
en2
..
.

ent

x0ij is a row vector relating the record on the j th trait of the ith animal to j , the fixed
P
effects for the j th trait. j has pj elements and j pj = p. When a record is missing,
it is set to 0 and so are the elements of the model for that record. Thus, whether data
are missing or not, the incidence matrix has dimension, nt by (p + nt). Now R has block
diagonal form as follows.

R1 0
... 0

R2 . . . 0
0

R = ..
(29)
..
..
.
.
.
.
0

. . . Rn

For an animal with no missing data, Ri is the t t environmental covariance matrix. For
an animal with missing data the rows (and columns) of Ri pertaining to missing data are
set to zero. Then in place of R^-1 ordinarily used in the mixed model equations, we use R* which is

( R*_1              )
(       .           )     (30)
(           .       )
(              R*_n )

R*_i is the zeroed type of g-inverse described in Section 3.3. It should be noted that R*_i is the same for every animal that has the same missing data. There are at most 2^t − 1 such unique matrices, and in the case of sequential culling only t such matrices, corresponding to trait 1 only, traits 1 and 2 only, . . . , all traits. Thus we do not need to store R and R* but only the unique types of R*_i.
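The construction of the R*_i is straightforward. The sketch below (Python/NumPy) builds the zeroed g-inverse for each missing-data pattern of the example later in this section, and reproduces the R*_i values shown there for animals 1-2, 3 and 4; the function name r_star is hypothetical.

    import numpy as np

    # R*_i: zero the rows and columns of R_i for the missing traits, invert the
    # observed block, and leave the missing rows/columns as zeros.
    R0 = np.array([[5.0, 3.0, 1.0],
                   [3.0, 6.0, 4.0],
                   [1.0, 4.0, 7.0]])   # environmental covariance matrix of the example

    def r_star(R0, observed):
        obs = np.asarray(observed)
        Rstar = np.zeros_like(R0)
        Rstar[np.ix_(obs, obs)] = np.linalg.inv(R0[np.ix_(obs, obs)])
        return Rstar

    print(r_star(R0, [0, 1, 2]))   # all traits observed (animals 1 and 2)
    print(r_star(R0, [1, 2]))      # trait 1 missing (animal 3)
    print(r_star(R0, [0]))         # only trait 1 observed (animal 4)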

V ar(a) has a simple form, which is

V ar(a) =

a11 G0 a12 G0 . . . a1n G0


a12 G0 a22 G0 . . . a2n G0

,
..
..
..

.
.
.
a1n G0 a2n G0 . . . ann G0

(31)

where G0 is the t t covariance matrix of additive effects in an unselected non-inbred


population. Then

[V ar(a)]1 =

a11 G1
a12 G1
. . . a1n G1
0
0
0
22 1
2n 1
a12 G1
a
G
.
.
.
a
G
0
0
0
..
..
..
.
.
.
2n 1
nn 1
a1n G1
a
G
.
.
.
a
G
0
0
0

(32)

aij are the elements of the inverse of A. Note that all nt of the aij are included in the
mixed model equations even though there are missing data.
We illustrate prediction by the following example that includes 4 animals and 3 traits
with the j vector having 2, 1, 2 elements respectively.

Animal
1
2
3
4

1
5
2
2

Trait
2 3
3 6
5 7
3 4
- -

1 2

X for 1 = 1 3
,
1 4

for 2 =
1 ,
1

1 3

and for 3 = 1 4 .
1 2

Then, with missing records included, the incidence matrix is

1
0
0
1
0
0
0
0
0
1
0
0

2
0
0
3
0
0
0
0
0
4
0
0

0
1
0
0
1
0
0
1
0
0
0
0

0
0
1
0
0
1
0
0
1
0
0
0

0
0
3
0
0
4
0
0
2
0
0
0

1
0
0
0
0
0
0
0
0
0
0
0

0
1
0
0
0
0
0
0
0
0
0
0

0
0
1
0
0
0
0
0
0
0
0
0

0
0
0
1
0
0
0
0
0
0
0
0

0
0
0
0
1
0
0
0
0
0
0
0

0
0
0
0
0
1
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
1
0
0
0
0

0
0
0
0
0
0
0
0
1
0
0
0

We assume that the environmental covariance matrix is

5 3 1

6 4

.
7
Then R for animals 1 and 2 is

.3059 .2000
.0706

.4000 .2000

,
.2471
R for animal 3 is

and for animal 4 is

0
0
.2692 .1538
,
.2308

.2 0 0

0 0

.
0
Suppose that

1.

A=
and

0 .5
0
1. .5 .5
1. .25
1.

2 1 1

3 2
G0 =
.
4
8

0
0
0
0
0
0
0
0
0
1
0
0

0
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0

(33)

V ar(
a) is

2.0 1.0 1.0

3.0 2.0

4.0

0
0
0 1.0 .5 .5
0
0
0
0
0
0 .5 1.5 1.0
0
0
0
0
0
0 .5 1.0 2.0
0
0
0
2.0 1.0 1.0 1.0 .5 .5 1.0 .5 .5
3.0 2.0 .5 1.5 1.0 .5 1.5 1.0
4.0 .5 1.0 2.0 .5 1.0 2.0
2.0 1.0 1.0 .5 .25 .25
3.0 2.0 .25 .75 .5
4.0 .25 .5 1.0
2.0 1.0 1.0
3.0 2.0
4.0

(34)

Using the incidence matrix, R , G1 , and y we get the coefficient matrix of mixed
model equations in (26.35) . . . (26.37). The right hand side vector is (1.8588, 4.6235, .6077, 2.5674, 8.1113, 1.3529, -1.0000, 1.2353, .1059, .2000, .8706, 0, .1923, .4615, .4000,
0, 0)0 . The solution vector is (8.2451, -1.7723, 3.9145, 3.4054, .8066, .1301, -.4723, .0154,
-.2817, .3965, -.0911, -.1459, -.2132, -.2480, .0865, .3119, .0681).
Upper left 8 x 8 (times 1000)

812 2329 400


141
494
306 200
71

7176 1000
353
1271
612 400
141

1069
554
1708
200
400
200

725
2191
71 200
247

7100
212 600
741

1229 431 045

1208 546

824

(35)

Upper right 8 x 9 and (lower left 9 8) (times 1000)

306
918
200
71
282
308
77
38

200
71
0
0
0 200 0 0
600
212
0
0
0 800 0 0

400 200
0
269 154
0 0 0

200
247
0 154
231
0 0 0

.
800
988
0 308
462
0 0 0

77 38 615
154
77
0 0 0

269 115
154 538
231
0 0 0
115
192
77
231 385
0 0 0

(36)

Lower right 9 9 (times 1000)


1434 482 70 615
154
77 410
103
51

1387 623
154 538
231
103 359
154

952
77
231
385
51
154
256

1231 308 154


0
0
0

1346 615
0
0
0
.

1000
0
0
0

1020 205 103

718 308
513

(37)

EM Algorithm

In spite of its possible slow convergence I tend to favor the EM algorithm for REML
to estimate variances and covariances. The reason for this preference is its simplicity
as compared to iterated MIVQUE and, above all, because the solution remains in the
parameter space at each round of iteration.
i = BLUP for the n breeding values
If the data were stored animals in traits and a
th
on the i trait, gij would be estimated by iterating on
j + trA1 Cij )/n,
gij = (
ai A1 a

(38)

where Cij is the submatrix pertaining to Cov(


ai ai , a
0j a0j ) in a g-inverse of the mixed
model coefficient matrix. These same computations can be effected from the solution with
ordering traits in animals. The following FORTRAN routine accomplishes this.

10

6
7

8
9

REAL *8 A( ), C( ), U( ), S
INTEGER T
.
.
.
NT=N*T
DO 7 I=1, T
DO 7 J=I, T
S=0. DO
DO 6 K=1, N
DO 6 L=1, N
S=S+A(IHMSSF(K,L,N))*U(T*K-T+I)*U(T*L-T+J)
Store S
.
.
.
DO 9 I=1, T
DO 9 J=I, T
S=0. DO
DO 8 K=1, N
DO 8 L=1, N
S=S+A(IHMSSF(K,L,N))*C(IHMSSF(T*K-T+I,T*L-T+J,NT))
Store S

A is a one dimensional array with N(N+1)/2 elements containing A1 . C is a one


dimensional array with NT(NT+1)/2 elements containing the lower (NT)2 submatrix of
a g-inverse of the coefficient matrix. This also is half-stored. U is the solution vector for
a0 . IHMSSF is a half-stored matrix subscripting function. The t(t + 1)/2 values of S in
i . The values of S in statement 9 are the values of
0i A1 a
statement 7 are the values of a
1
trA Cij .
In our example these are the following for the first round. (.1750, -.1141, .0916,
0i A1 aj , and (7.4733, 3.5484, 3.4788, 10.5101, 7.0264, 14.4243)
.4589, .0475, .1141), for a
for trA1 Cij . This gives us as the first estimate of G0 the following,

1.912

.859 .893
2.742 1.768
.
3.635

remain the same for all rounds of iteration,


Note that the matrix of the quadratics in a
1
that is, A .
change with each round of iteration. However, they
In constrast, the quadratics in e

11

have a simple form since they are all of the type,


n
X

0i Qi e
i ,
e

i=1

i is the vector of BLUP of errors for the t traits in the ith animal. The e
are
where e
computed as follows
ij
(39)
eij = yij x0ij oj a
when yij is observed. eij is set to 0 for yij = 0. This is not BLUP, but suffices for
subsequent computations. At each round we iterate on
+ trQij WCW0 ) i = 1, . . . , tij , j = i, . . . , t.
trQij R = (
e0 Qij e

(40)

This gives at each round a set of equations of the form


T
r = q,

(41)

where T is a symmetric t t matrix, r = (r11 r12 . . . rtt )0 , and q is a t 1 vector of


numbers. Advantage can be taken of the symmetry of T, so that only t(t+1)/2 coefficients
need be computed rather than t2 .
Advantage can be taken of the block diagonal form of all Qij . Each of them has the
following form

B1ij
0

B2ij

.
(42)
Qij =
...

Bnij

There are at most t2 1 unique Bkij for any Qij , these corresponding to the same number
of unique R
k . The B can be computed easily as follows. Let

R
k =

f11 f12 . . . f1t


f12 f22 . . . f2t
..
..
..
.
.
.
f1t f2t . . . ftt

(f1 f2 . . . ft ).

Then
Bkii = fi fi0
Bkij =

(fi fj0 )

(fi fj0 )0

(43)
for i 6= j.

(44)

In computing trQij R remember that Q and R have the same block diagonal form. This
computation is very easy for each of the n products. Let

Bkij =

b11 b12 . . . b1t


b12 b22 . . . b2t
..
..
..
.
.
.
b1t b2t . . . btt
12

Then the coefficient of rii contributed by the k th animal in the trace is bii . The coefficient
of rij is 2bij .
Finally note that we need only the n blocks of order t t down the diagonals of
WCW0 for trQij WCW0 . Partition C as

C=

Cxx Cx1 . . . Cxn


C0x1 C11 . . . C1n
..
..
..
.
.
.
0
0
Cxn C1n . . . Cnn

Then the block of WCW0 for the ith animal is


Xi Cxx X0i + Xi Cxi + (Xi Cxi )0 + Cii

(45)

and then zeroed for missing rows and columns, although this is not really necessary since
the Qkij are correspondingly zeroed. Xi is the submatrix of X pertaining to the ith animal.
This submatrix has order t p.
We illustrate some of these computations for r. First, consider computation of Qij .
Let us look at B211 , that is, the block for the second animal in Q11 .

.3059 .2000
.0706

.4000 .2000
R2 = .2000
.
.0706 .2000
.2471
Then

B211

.3059
.0936 .0612
.0216

.0400 .0141
= .2000 (.3059 .2000 .0706) =
.
.0706
.0050

Look at B323 , that is, the block for the third animal in Q23 .

0
0
0

.2692 .1538
R3 = 0
.
0 .1538
.2308
Then

B323

= .2692 (0 .1538 .2308) + transpose of this product


.1538

0
0
0

0
.0621
= 0 .0414
+ ()
0
.0237 .03500

0
0
0

.0858
= 0 .0828
.
0
.0858 .0710
13

Next we compute trQij R. Consider the contribution of the first animal to trQ12 R.

.1224

B112 =

.1624 .0753
.1600
.0682
.
.0282

Then this animal contributes


.1224 r11 + .2(.1624) r12 2(.0753) r13 .1600 r22 + 2(.0682) r23 .0282 r33 .
Finally we illustrate computing a block
animal.

0 0

X3 = 0 0
0 0

Cxx

of WCW0 by (26.45). We use the third

29.4578 9.2266 3.0591

3.1506 .6444

3.5851
=

Cx3 =

0 0 0
1 0 0
.
0 1 2

2.7324 .3894

.5254
.0750

2.3081
.0305
.
30.8055 8.5380

2.7720

1.7046 1.2370 1.0037

.2759
.2264
.1692

.6174 1.9083 1.2645


.
.8627 1.2434 4.7111

.0755 .0107
.7006

1.9642

C33 =

.9196 .9374
2.7786 1.8497
.
3.8518

Then the computations of (26.45) give

1.9642

.3022 .2257
2.5471 1.6895
.
4.9735

Since the first trait was missing on animal 3, the block of WCW0 becomes

0
0
2.5471 1.6895
.
4.9735

Combining these results, r for the first round is the solution to


.227128 .244706
.086367
.080000 .056471
.009965

.649412
.301176
.320000
.272941
.056471

.322215
.160000
.254118
.069758

.392485 .402840
.103669

.726892 .268653
.175331

14

= (.137802, .263298, .084767, .161811, .101820, .029331)0


+ (.613393, .656211, .263861, .713786, .895139, .571375)0 .
This gives the solution

3.727 1.295 .311


=
3.419 2.270
R

.
4.965

15

Chapter 27
Sire Model, Multiple Traits
C. R. Henderson
1984 - Guelph

Only One Trait Observed On A Progeny

This section deals with a rather simple model in which there are t traits measured on the
progeny of a set of sires. But the design is such that only one trait is measured on any
progeny. This results in R being diagonal. It is assumed that each dam has only one
recorded progeny, and the dams are non-inbred and unrelated. An additive genetic model
is assumed. Order the observations by progeny within traits. There are t traits and k
sires. Then the model is

y1
X1
0 ... 0
1

X2 . . . 0
y2
0
2
. = .

..
.. ..
.
.

.
.
.
. .
yt

Xt

Z1 0 . . . 0
0 Z2 . . . 0
..
..
..
.
.
.
0
0
Zt

s1
s2
..
.

st

e1
e2
..
.

(1)

et

yi represents ni progeny records on trait i, i is the vector of fixed effects influencing the
records on the ith trait, Xi relates i to elements of yi , and si is the vector of sire effects
for the ith trait. It has k has a null column corresponding to such a sire.

V ar

s1
s2
..
.

st

Ab11 Ab12 . . . Ab1t


Ab12 Ab22 . . . Ab2t

..
..
..
= G.
.
.
.
Ab1t Ab2t . . . Abtt

(2)

A is the k k numerator relationship matrix for the sires. If the sires were unselected,
bij = gij /4, where gij is the additive genetic covariance between traits i and j.

V ar

e1
e2
..
.
et

Id1
0 ... 0
0
Id2 . . . 0
..
..
..
.
.
.
0
0 . . . Idt
1

= R.

(3)

Under the assumption of unselected sires


di = .75 gii + rii ,
where rii is the ith diagonal of the error covariance matrix of the usual multiple trait
model. Then the GLS equations for fixed s are
0
d1
1 X 1 X1 . . .

..

0
d1
1 X 1 Z1 . . .
..
.

0
..
.

0
0
...
0
. . . d1
t X t Xt
1 0
1 0
0
d1 Z 1 Z1 . . .
d1 Z 1 X 1 . . .
..
..
..
.
.
.

0
. . . d1
t Z t Xt

0
Z
d1
X
t t
t

..

0
. . . d1
t Z t Zt

0
d1
1 X 1 y1

..

o1
.
.
.

0
..
.

ot
s1
..
.
st

0
d1
t X t yt
0
d1
1 Z 1 y1
..
.
0
d1
1 Z t yt

(4)

The mixed model equations are formed by adding G1 to the lower right (kt)2 submatrix
of (27.4), where

A1 b11 . . . A1 b1t

..
..
,
G1 =
(5)
.
.

1 1t
1 tt
A b
... A b
and bij is the ij th element of the inverse of

b11 . . . b1t
.
..
.
.
.
.
b1t . . . btt
With this model it seems logical to estimate di by
[y0 i yi ( oi )0 X0 i yi (uoi )0 Z0 i yi ]/[ni rank(Xi Zi )].

(6)

oi and uoi are some solution to (27.7)


X0 i X i X0 i Z i
Z0 i Xi Z0 i Zi

oi
uoi

X0 i yi
Z0 i yi

(7)

Then using these di , estimate the bij by quadratics in s, the solution to (27.4). The
quadratics needed are
s0i A1sj ;

i = 1, . . . , t;

j = i, . . . , t.

These are computed and equated to their expectations. We illustrate this section with a
small example. The observations on progeny of three sires and two traits are
2

Sire Trait
1
1
2
1
3
1
1
2
2
2

Progeny Records
5,3,6
7,4
5,3,8,6
5,7
9,8,6,5

Suppose X01 = [1 . . . 1] with 9 elements, and X02 = [1 . . . 1] with 6 elements.


1
1
1
0
0
0
0
0
0

Z1 =

0
0
0
1
1
0
0
0
0

0
0
0
0
0
1
1
1
1

, Z2 =

1
1
0
0
0
0

0
0
1
1
1
1

0
0
0
0
0
0

Suppose that
30 I9
0
0
25 I6

R=

1. .5 .5

A = .5 1. .25 ,
.5 .25 1.
and
b11 b12
b12 b22

3 1
1 2

Then

3. 1.5 1.5 1. .5 .5

3. .75 .5 1. .25

3.
.5
.25
1.

,
G=
2. 1. 1.

2. .5
2.
10 4 4 5
2
2

8
0
2 4
0

1
8
2
0
4

=
.

15 6 6

15

12
0
12

G1

X0 R1
Z0 R1

(X Z) =

1
150

45

0 15 10 20 0 0 0
36 0 0 0 12 24 0

15 0 0 0 0 0

10 0 0 0 0

20 0 0 0

12 0 0

24 0
0

(8)

Adding G1 to the lower 6 6 submatrix of (27.8) gives the mixed model coefficient
matrix. The right hand sides are [1.5667, 1.6, .4667, .3667, .7333, .48, 1.12, 0]. The
inverse of the mixed model coefficient matrix is

5.210
0.566
1.981
1.545
1.964
0.654
0.521
0.652

0.566 1.981 1.545 1.964 0.654 0.521 0.652


5.706 0.660 0.785 0.384 1.344 1.638 0.690

0.660
2.858
1.515
1.556
0.934
0.523
0.510

0.785
1.515
2.803
0.939
0.522
0.917
0.322

0.384
1.556
0.939
2.783
0.510
0.322
0.923

1.344
0.934
0.522
0.510
1.939
1.047
0.984

1.638
0.523
0.917
0.322
1.047
1.933
0.544
0.690
0.510
0.322
0.923
0.984
0.544
1.965

(9)

The solution to the MME is (5.2380, 6.6589, -.0950, .0236, .0239, -.0709, .0471, -.0116).

Multiple Traits Recorded On A Progeny

When multiple traits are observed on individual progeny, R is no longer diagonal. The
linear model can still be written as (27.1). Now, however, the yi do not have the same
number of elements, and Xi and Zi have varying numbers of rows. Further,

R=

I r11
P012 r12
..
.

P12 r12 . . .
I r22 . . .
..
.

P01t r1t

P02t r2t

P1t r1t
P2t r2t
..
.

(10)

. . . I rtt

The I matrices have order equal to the number of progeny with that trait recorded.

r11 . . .
.
.
.

r1t
..
.

r1t . . . rtt

is the error variance-covariance matrix. We can use the same strategy as in Chapter 25
for missing data. That is, each yi is the same length with 0s inserted for missing data.
4

Accordingly, all Xi and Zi have the same number of rows with rows pertaining to missing
observations set to 0. Further, R is the same as for no missing data except that rows
corresponding to missing observations are set to 0. Then the zeroed type of g-inverse of
R is

D11 D12 . . . D1t

D12 D22 . . . D2t


.
.
(11)
..
..

.
.
.
D1t D2t . . . Dtt
Each of the Dij is diagonal with order, n. Now the GLS equations for fixed s are
X0 1 D11 X1 . . .
.
.
.

X0 1 D1t Xt
..
.

X0 t D1t X1
Z0 1 D11 X1
..
.

. . . X0 t Dtt Xt
. . . Z0 1 D1t Xt
..
.

X0 t D1t Z1
Z0 1 D11 Z1
..
.

Z0 t D1t X1

. . . Z0 t Dtt Xt

Z0 t D1t Z1

o1
X0 1 D1t Zt

..
..
.
.
o
0

. . . X t Dtt Zt
t
0

1
. . . Z 1 D1t Zt
s
.
..
.
.
.

X0 1 D11 Z1 . . .
..
.

. . . Z0 t Dtt Zt

X0 1 D11 y1 + . . . + X0 1 D1t yt
..
..
.
.
X0 t D1t y1 + . . . + X0 t Dtt yt
Z0 1 D11 y1 + . . . + Z0 1 D1t yt
..
..
.
.
Z0 t D1t y1

+ . . . + Z0 t Dtt yt

st

With G1 added to the lower part of (27.12) we have the mixed model equations.
We illustrate with the following example.
Trait
Sire Progeny 1 2
1
1
6 5
2
3 5
3
- 7
4
8 2
5
4 6
6
- 7
7
3 3
8
5 4
9
8 We assume the same G as in the illustration of Section 27.1, and
r11 r12
r12 r22

=
5

30 10
10 25

(12)

We assume that the only fixed effects are 1 and 2 . Then using the data vector with
length 13, ordered progeny in sire in trait,
X01 = (1 1 1 1 1 1 1), X02 = (1 1 1 1 1 1),

Z1 =

1
1
1
0
0
0
0

0
0
0
1
1
0
0

0
0
0
0
0
1
1

, Z2 =

1
1
1
0
0
0

0
0
0
1
1
0

0
0
0
0
0
1

y10 = (6 3 8 4 3 5 8), y20 = (5 5 7 6 7 4),


and

R=

30

0
30

0
0
30

0
0
0
30

0
0
0
0
30

0
0
0
0
0
30

0 10 0
0 0 10
0 0 0
0 0 0
0 0 0
0 0 0
30 0 0
25 0
25

0 0
0 0
0 0
0 10
0 0
0 0
0 0
0 0
0 0
25 0
25

0 0
0 0

0 0

0 0

0 0

0 10

0 0
.

0 0

0 0

0 0

0 0

25 0
25

Then the GLS coefficient matrix for fixed s is in (27.13).

0.254
0.061
0.110
0.072
0.071
0.031
0.015
0.015

0.061
0.110
0.072
0.072 0.031 0.015 0.015
0.265 0.031 0.015 0.015
0.132
0.086
0.046

0.031
0.110
0.0
0.0
0.031
0.0
0.0

0.015
0.0
0.072
0.0
0.0
0.015
0.0

0.015
0.0
0.0
0.072
0.0
0.0
0.015

0.132 0.031
0.0
0.0
0.132
0.0
0.0

0.086
0.0
0.015
0.0
0.0
0.086
0.0
0.046
0.0
0.0
0.015
0.0
0.0
0.046

(13)

G1 is added to the lower 6 6 submatrix to form the mixed model coefficient matrix.
The right hand sides are (1.0179, 1.2062, .4590, .1615, .3974, .6031, .4954, .1077). The

inverse of the coefficient matrix is

6.065
1.607
2.111
1.735
1.709
0.702
0.602
0.546

1.607 2.111 1.735 1.709 0.702 0.603 0.546


5.323 0.711 0.625 0.519 1.472 1.246 1.017

0.711
2.880
1.533
1.527
0.953
0.517
0.506

0.625
1.533
2.841
0.893
0.517
0.938
0.303

.
0.519
1.527
0.893
2.844
0.506
0.303
0.944

1.472
0.953
0.517
0.506
1.939
1.028
1.003

1.246
0.517
0.938
0.303
1.028
1.924
0.562
1.017
0.506
0.303
0.943
1.002
0.562
1.936

(14)

The solution is [5.4038, 5.8080, .0547, -.1941, .1668, .0184, .0264, -.0356].
If we use the technique of including in y, X, Z, R, G the missing data we have
X01 = (1 1 0 1 1 0 1 1 1),

Z1 =

1
1
0
1
0
0
0
0
0

0
0
0
0
1
0
1
0
0

0
0
0
0
0
0
0
1
1

X02 = (1 1 1 0 1 1 0 1 0),

, Z2 =

1
1
1
0
0
0
0
0
0

0
0
0
0
1
1
0
0
0

0
0
0
0
0
0
0
1
0

y10 = (6, 3, 0, 8, 4, 0, 3, 5, 8),


and
y20 = (5, 5, 7, 0, 6, 7, 0, 4, 0).
D11 = diag[.0385, .0385, 0, .0333, .0385, 0, .0333, .0385, .0333]
D12 = diag[.0154, .0154, 0, 0, .0154, 0, 0, .0154, 0]
D22 = diag[.0462, .0462, .04, 0, .0462, .04, 0, .0462, 0].
This leads to the same set of equations and solution as when y has 13 elements.

Relationship To Sire Model With Repeated Records


On Progeny

The methods of Section 27.2 could be used for sire evaluation using progeny with repeated
records (lactations, e.g.), but we do not wish to invoke the simple repeatability model.
Then lactation 1 is trait 1, lactation 2 is trait 2, etc.
7

Chapter 28
Joint Cow and Sire Evaluation
C. R. Henderson
1984 - Guelph

At the present time, (1984), agencies evaluating dairy sires and dairy females have
designed separate programs for each. Sires usually have been evaluated solely on the
production records of their progeny. With the development of an easy method for computing A1 this matrix has been incorporated by some agencies, and that results in the
evaluation being a combination of the progeny test of the individual in question as well
as progeny tests of his relatives, eg., sire and paternal brothers. In addition, the method
also takes into account the predictions of the merits of the sires of the mates of the bulls
being evaluated. This is an approximation to the merits of the mates without using their
records.
In theory one could utilize all records available in a production testing program
and could compute A1 for all animals that have produced these records as well as
additional related animals without records that are to be evaluated. Then these could be
incorporated into a single set of prediction equations. This, of course, could result in a set
of equations that would be much too large to solve with existing computers. Nevertheless,
if we are willing to sacrifice some accuracy by ignoring the fact that animals change herds,
we can set up equations that are block diagonal in form that may be feasible to solve.

Block Diagonality Of Mixed Model Equations

Henderson (1976) presented a method for rapid calculation of A1 without computing A.


A remarkable property of A1 is that the only non-zero off-diagonal elements are those
pertaining to a pair of mates, and those pertaining to parent - progeny pairs. These
non-zero elements can be built up by entering the data in any order, with each piece of
data incorporating the individual identification number, the sire number, and the dam
number. At the same time one could enter with this information the production record
and elements of the incidence matrix of the individual. Now when the dam and her
progeny are in different herds, we pretend that we do not know the dam of the progeny
and if, when a natural service sire has progeny in more than one herd, we treat him
as a different sire in each herd, there are no non-zero elements of A1 between herds.
This strategy, along with the fact that most if not all elements of are peculiar to the
individual herd, results in the mixed model coefficient matrix having a block diagonal
form. The elements of the model are ordered as follows
1

0 : a subvector of common to all elements of y.


a0 : a subvector of a, additive genetic values, pertaining to sires used in several herds.
i (i = 1, . . . , number of herds): a subvector of pertaining only to records in the ith
herd.
ai : a subvector of a pertaining to animals in the ith herd. ai can represent cows with
records, or dams and non-AI sires of the cows with records. In computing A1 for
the animals in the ith herd the dam is assumed unknown if it is in a different herd.
When Sectiom 28.3 method is used (multiple records) no records of a cow should be
used in a herd unless the first lactation record is available. This restriction prevents
using records of a cow that moves to another herd subsequent to first lactation. With
this ordering and with these restrictions in computing A1 the BLUP equations have
the following form

C00 C01 C02 C0k


0
C01 C11 0
0
0
C02 0
C22 0
..
..
.
.
0
C0k 0
0
Ckk

0
1
2
..
.
k

r0
r1
r2
..
.
rk

0 = ( 0 a0 ),
0
0
0
i = ( i ai ).
Then with this form of the equations the herd unknowns can be absorbed into the 0
and a0 equations provided the Cii blocks can be readily inverted. Otherwise one would
need to solve iteratively. For example, one might first solve iteratively for 0 and a0 sires
ignoring i , ai . Then with these values one would solve iteratively for the herd values.
Having obtained these one would re-solve for 0 and the a0 values, adjusting the right
hand sides for the previously estimated herd values.
The AI sire equations would also contain values for the base population sires. A
base population dam with records would be included with the herd in which its records
were made. Any base population dam that has no records, has only one AI son, and has
no female progeny can be ignored without changing the solution.

Single Record On Single Trait

The simplest example of joint cow and sire evaluation with multiple herds involves a single
trait and with only one record per tested animal. We illustrate this with the following
example.
2

Base population animals


1 male
2 female with record in herd 1
3 female with record in herd 2
AI Sires
4 with parents 1 and 2
5 with parents 1 and 3
Other Females With Records
6 with unknown parents, record in herd 1
7 with unknown parents, record in herd 2
8 with parents 4 and 6, record in herd 1
9 with parents 4 and 3, record in herd 2
10 with parents 5 and 7, record in herd 2
11 with parents 5 and 2, record in herd 1
Ordering these animals (1,4,5,2,6,8,11,3,7,9,10) the A matrix is in (28.1).

1 .5 .5 0 0 .25
.25
0

1 .25 .5 0 .5
.375 0

1 0 0 .125
.5
.5

1
0
.25
.5
0

1 .5
0
0

1
.1875
0

1
.25

0 .25
.25
0
.5
.125

0 .375
.5

0 .25
0

0
0
0

0 .25 .0625

0 .3125 .25

0
.5
.25

1
0
.5

1
.1875
1

(1)

A1 shown in (28.2)
2 1 1 .5

3
0 1

3 .5

0
0
0 .5 0
0
0
.5 1
0 .5 0
1
0

0
0 1 1 .5
0 1

0
0 1
0 0
0
0

1.5 1
0
0 0
0
0

2
0
0 0
0
0

2
0 0
0
0

2 0
1
0

1.5
0 1

2
0
2

(2)

Note that the lower 8 8 submatrix is block diagonal with two blocks of order 4 4 down
the diagonal and 4 4 null off-diagonal blocks. The model assumed for our illustration is
yij = i + aij + eij ,
3

where i refers to herd and j to individual within herd. Then with ordering
(a1 , a4 , a5 , 1 , a2 a6 , a8 , a11 , 2 , a3 a7 , a9 , a10 )
the incidence matrix is as shown in (28.3). Note that o does not exist.

0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0

1
1
1
1
0
0
0
0

1
0
0
0
0
0
0
0

0
1
0
0
0
0
0
0

0
0
1
0
0
0
0
0

0
0
0
1
0
0
0
0

0
0
0
0
1
1
1
1

0
0
0
0
1
0
0
0

0
0
0
0
0
1
0
0

0
0
0
0
0
0
1
0

0
0
0
0
0
0
0
1

(3)

Suppose y = [3,2,5,6,7,9,2,3] corresponding to animals 2,6,8,11,3,7,9,10. We assume that


h2 = .25 which implies e2 /a2 = 3. Then adding 3 A1 to appropriate elements of the
OLS equations we obtain mixed model equations displayed in (28.4).

6 3 3 0 1.5 0
0
0 0 1.5 0
0
0

9
0
0 3 1.5 3
0 0 1.5 0 3
0

9
0
1.5
0
0
3
0
3
1.5
0
3

4 1
1
1
1 0 0
0
0
0

7
0
0 3 0 0
0
0
0

5.5 3
0 0 0
0
0
0

7
0 0 0
0
0
0

7
0
0
0
0
0

4 1
1
1
1

7
0 3
0

5.5
0 3

7
0

a
1
a
4
a
5

1
a
2
a
6
a
8
a
11

2
a
3
a
7
a
9
a
10

0
0
0
16
3
2
5
6
21
7
9
2
3

(4)

Note that the lower 10 10 block of the coefficient matrix is block diagonal with 5 5
blocks down the diagonal and 5 5 null blocks off-diagonal. The solution to (28.4) is
[-.1738, -.3120, -.0824, 4.1568, -.1793, -.4102, -.1890, .1512, 5.2135, .0857, .6776, .5560, -.0611].
Note that the solution to (a1 , a4 , a5 ) could be found by absorbing the other equations as
follows.

4 1 1
1
1

0 1.5 0
0
0
7 0
0 3
6 3 3

9
0
5.5 3
0

0 3 1.5 3 0

9
0 1.5 0
0 3
7
0

0
0
0
1.5 3 1.5
0 1.5 0
0 3 0
0
0 3

4 1 1
1
1

7 0 3
0
0 1.5 0
0
0

0 1.5 0 3
5.5
0 3
0

7
0
0
3
1.5
0
3

0
0
0

1.5 1.5 3
0
0 1.5 0
0
0
a
1


0
0 1.5
0

a4 = 0 0 3 1.5 3

0 3 0
0
0
1.5
0
0
3
a

0
0 3

4 1

1
1
1

0
0 3

5.5 3
0

7
0

4 1

16
3
2
5
6

0 1.5 0
0
0
0 1.5 0 3
0

0 3 1.5
0 3

1
1
1

0 3
0

5.5
0 3

7
0

21
7
9
2
3

Iterations on these equations were carried out by two different methods. First, the herd
equations were iterated 5 rounds with AI sire values fixed. Then the AI sire equations were
iterated 5 rounds with the herd values fixed and so on. It required 17 cycles (85 rounds)
to converge to the direct solution previously reported. Regular Gauss-Seidel iteration
produced conversion in 33 rounds. The latter procedure would require more retrieval of
data from external storage devices.

Simple Repeatability Model

As our next example we use the same animals as before but now we have records as
follows.
Herd 1
Years
Cow 1 2 3
2
5 6 6
4 5 3
8
- 7 6
11
- - 8

Herd 2
Years
Cow 1 2 3
3
8 - 7
9 8 7
9
- 8 8
10
- - 7
6

We assume a model,
yijk = ij + aik + pik + eijk .
i refers to herd, j to year, and k to cow. It is assumed that h2 = .25, r = .45. Then
V ar(a) = A e2 /2.2.
V ar(p) = I e2 /2.75.
e2 /a2 = 2.2, e2 /p2 = 2.75

The diagonal coefficients of the p equations of OLS have added to them 2.75. Then p
can be absorbed easily. This can be done without writing the complete equations by
weighting each observation by
2.75
nik + e2 /2.75
where nik is the number of records on the ik th cow. These weights are .733, .579, .478
for 1,2,3 records respectively. Once these equations are derived, we then add 2.2 A1 to
appropriate coefficients to obtain the mixed model equations. The coefficient matrix is
in (28.5) . . . (28.7), and the right hand side vector is (0, 0, 0, 4.807, 9.917, 10.772, 6.369,
5.736, 7.527, 5.864, 10.166, 8.456, 13.109, 5.864, 11.472, 9.264, 5.131). The unknowns are
in this order (a1 , a4 , a5 , 11 , 12 , 13 , a2 , a6 , a8 , a11 , 21 , 22 , 23 , a3 , a7 , a9 , a10 ). Note
that block diagonality has been retained. The solution is
[.1956, .3217, .2214, 4.6512, 6.127, 5.9586, .2660, -.5509, .0045, .5004, 8.3515, 7.8439,
7.1948, .0377, .0516, .2424, .0892].
Upper 8 8

4.4 2.2 2.2

6.6
0

6.6

0
0
0
1.057

0
0
0
0
1.636

0
1.1
0
2.2
0
1.1
0
.579
0
.579
1.79
0
5.558

0
1.1
0
.478
.478
.478
0
4.734

(5)

Upper right 8 9 and (lower left 9 8)

0
0
0 0 0 1.1
0
0
0
2.2
0
0 0 0 1.1
0 2.2
0

0
2.2 0 0 0 2.2 1.1
0
2.2

0
0
0 0 0
0
0
0
0

.579
0
0 0 0
0
0
0
0

.579 .733 0 0 0
0
0
0
0

0
2.2 0 0 0
0
0
0
0
2.2
0
0 0 0
0
0
0
0
7

(6)

Lower right 9 9

5.558

0
5.133

0
0
1.211

0
0
0
1.057

0
0
0
0
1.79

0
0
.733
0
0
5.133

0
0
0
0
0
0

.478
0
0

.478 .579
0

.478 .579 .733

0
2.2
0

4.734
0
2.2

5.558
0
5.133

(7)

Multiple Traits

As a final example of joint cow and sire evaluation we evaluate on two traits. Using the
same animals as before the records are as follows.
Herd 1
Trait
Cow 1 2
2
6 8
6
4 6
8
9
11
3

Herd 2
Trait
Cow 1 2
3
7
7
2
9
8
10 6 9

We assume a model,
yijk = ij + aijk + eijk ,
where i refers to herd, j to trait, and k to cow. We assume that the error variancecovariance matrix for a cow and the additive genetic variance-covariance matrix for a
non-inbred individual are
!
!
5 2
2 1
and
,
2 8
1 3

respectively. Then R is

5 2 0 0

8 0 0

5 2

0
0
0
0
5

0
0
0
0
0
8

0
0
0
0
0
0
5

0
0
0
0
0
0
0
8

0
0
0
0
0
0
0
0
8

0
0
0
0
0
0
0
0
0
5

0
0
0
0
0
0
0
0
0
2
8

Ordering traits within animals, G is composed of 2 2 blocks as follows


2 aij
aij
aij 3 aij

The right hand sides of the mixed model equations are (0, 0, 0, 0, 0, 0, 3.244, 1.7639,
.8889, .7778, .5556, .6111, 1.8, 0, 0, .375, 2.3333, 2.1167, 1.4, 0, 0, .25, 0, 1., .8333, .9167)
corresponding to ordering of equations,
[a11 , a12 , a41 , a42 , a51 , a52 , 11 , 12 , a21 , a22 , a61 , a62 , a81 , a82 , a11,1 , a11,2 , 21 , 22 , a31 , a32 ,
a71 , a72 , a91 , a92 , a10,1 , a10,2 ].
The coefficient matrix is block diagonal with two 10 10 blocks in the lower diagonal
and with two 10 10 null blocks off-diagonal. The solution is
(.2087, .1766, .4469, .4665, .1661, .1912, 5.9188, 5.9184, .1351, .3356, -.1843, .1314, .6230,
.5448, -.0168, -.2390, 6.0830, 6.5215, .2563, .2734, -.4158, -.9170, .4099, .5450, -.0900,
.0718).

Summary Of Methods

The model to be used contains the following elements.


1. X0 0 : pertaining to all records
2. Xi i : pertaining to records only on the ith herd.
3. Z0 a0 : additive genetic values of sires used in several herds, AI sires in particular, but
could include natural service sires used in several herds.
9

4. Zi ai : additive genetic values of all females that have made records in the ith herd.
Some of these may be dams of AI sires. Others will be daughters of AI sires, and
some will be both dams and daughters of different AI sires. Zi ai will also contain
any sire with daughters only in the ith herd or with daughters in so few other herds
that this is ignored, and he is regarded as a different sire in each of the other herds.
One will need to decide how to handle such sires, that is, how many to include with
AI sires and how many to treat as a separate sire in each of the herds in which he
has progeny.
5. A1 should be computed by Hendersons simple method, possibly ignoring inbreeding
in large data sets, since this reduces computations markedly. In order to generate
block diagonality in the mixed model equations the elements of A1 for animals in
Zi ai should be derived only from sires in a0 and from dams and sires in ai (same
herd). This insures that there will be no non-zero elements of A1 between any pair
of herds, provided ordering is done according to the following

(1)
(2)
(3)
(4)
(5)
(6)

X0 0
Z0 a0
X1 1
Z1 a1
X2 2
Z2 a2
..
.
etc.

Gametic Model To Reduce The Number Of Equations

Quaas and Pollak (1980) described a gametic additive genetic model that reduces the
number of equations needed for computing BLUP. The only breeding values appearing
in the equations are those of animals having tested progeny. Then individuals with no
progeny can be evaluated by taking appropriate linear functions of the solution vector.
The paper cited above dealt with multiple traits. We shall consider two situations, (1)
single traits with one or no record per trait and (2) single traits with multiple records and
the usual repeatability model assumed. If one does not choose to assume the repeatability
model, the different records in a trait can be regarded as multiple traits and the Quaas
and Pollak method used.
10

6.1

Single record model

Let the model be


y = X + Za a + other possible random factors + e.
There are b animals with tested progeny, and c b of these parents are tested. There
are d tested animals with no progeny. Thus y has c + d elements. In the regular mixed
model a has b + d elements. Za is formed from an identity matrix of order b + d and then
deleting b c rows corresponding to parents with no record.
V ar(a) = Aa2 ,
V ar(e) = Ie2 ,
Cov(a, e0 ) = 0.
Now in the gametic model, which is linearly equivalent to the model above, a has
only b elements corresponding to the animals with tested progeny. As before y has c + d
elements, and is ordered such that records of animals with progeny appear first.
P
Q

Za =

P is a c b matrix formed from an identity matrix of order, b, by deleting b c rows


corresponding to parents without a record. Q is a d b matrix with all null elements
except the following. For the ith individual .5 is inserted in the ith row of Q in columns
corresponding to its parents in the a vector. Thus if both parents are present, the row
contains two .5s. If only one parent is present, the row contains one .5. If neither
parent is present, the row is null. Now, of course, A has order, b, referring to those animals
with tested progeny. V ar(e) is no longer Ie2 . It is diagonal with diagonal elements as
follows for noninbred animals.
(1) e2 for parents.
(2) e2 + .5 a2 for progeny with both parents in a.
(3) e2 + .75 a2 for progeny with one parents in a.
(4) e2 + a2 for progeny with no parent in a.
This model results in d less equations than in the usual model and a possible large
reduction in time required for a solution to the mixed model equations.
Computation of a
i , BLUP of a tested individual not in the solution for a but providing
data in y, is simple.
0

.5 (Sum of parental a
).
ei = yi xi o zi u
11

xi is the incidence matrix for the ith animal with respect to .


, other random factors
z0i is the incidence matrix for the ith animal with respect to u
in the model. Then
) + ki ei ,
a
i = .5 (sum of parental a
2
2
2
where ki = .5 a /(.5 a + e ) if both parents known,
= .75 a2 /(.75 a2 + e2 ) if one parent known,
= a2 /(a2 + e2 ) if neither parent known.
),
The solution for an animal with no record and no progeny is .5 (sum of parental a
in the solution.
provided these parents, if known, are included in the b elements of a
A simple sire model for single traits can be considered a special case of this model.
The incidence matrix for sires is the same as in Chapter 23 except that it is multipled
by .5. The error variance is I(e2 + .75 a2 ). The G submatrix for sires is Aa2 rather
than .25 a2 A. Then the evaluations from this model for sires are exactly twice those of
Chapter 23.
A sire model containing sires of the mates but not the mates records can be formulated by the gametic model. Then a would include both sires and grandsires. The
incidence matrix for a progeny would contain elements .5 associated with sire and .25
associated with grandsire. Then the error variance would contain e2 + .6875 a2 , e2 +
.75 a2 , or e2 + .9375 a2 for progeny with both sire and grandsire, sire only, or grandsire
only respectively.
We illustrate the methods of this section with a very simple example. Animals 1, . . . , 4
have records (5,3,2,8). X0 = (1 2 1 3). Animals 1 and 2 are the parents of 3, and animal
1 is the parent of 4. The error variance is e2 = 10 and a2 = 4. We first treat this as an
individual animal model where
1 0 .5 .5

1 .5 0

A=

1 .25
1

The mixed model equations are

1.5

.1
.2
.1
.3

.558333 .125 .25 .166667


u

.475 .25
0
2 =
u

.6
0
3
u
.433333
u4

The solution is
12

3.7
.5
.3
.2
.8

(8)

(2.40096, .73264, -.57212, .00006, .46574).


Now in the gametic model the incidence matrix is

1
2
1
3

1
0
.5
.5

0
1
.5
0

G=

4 0
0 4

, R = dg (10, 10, 12, 13).

12 = 10 + .5(4), 13 = 10 + .75(4).
Then the mixed model equations are

1.275641 .257051 .241667

.390064 .020833

.370833

3.112821

u1 = .891026 .
.383333
u2

(9)

The solution is
(2.40096, .73264, .57212).

(10)

This is the same as the first 3 elements of (28.8).


e3
u3
e4
u4

=
=
=
=

2 2.40096 .5(.73264 .57212) = .48122.


.5(.73264 .57212) + 2(.48122)/12 = .00006.
8 3(2.40096) .5(.73264) = .43080.
.5(.73264) + 3(.43080)/13 = .46574.

u3 , u4 are the same as in (28.8).

6.2

Repeated records model

This section is concerned with multiple records in a single trait and under the assumption
that

yi1
1 r r

yi2
r 1 r 2
=

V ar
yi3
r r 1 y ,

..
.. .. ..
.
. . .
where y has been adjusted for random factors other than producing ability and random
error. The subscript i refers to a particular animal. The model is
y = X + Za a + Zp p + possibly other random factors + e.
13

Aa2 0
0
a

2
V ar p = 0 Ip 0 .
e
0
0 Ie2

In an unselected population a2 = h2 y2 , p2 = (r h2 )y2 , e2 = (1 r)y2 , after adjusting y


for other random factors. As before b animals have progeny; c b of these have records.
These records number n1 . Also as before d animals with records have no progeny. The
number of records made by these animals is n2 .
First we state the model as described in Chapter 24. X, Za , Zb all have n1 + n2 rows.
The number of elements in a is b + d. The Za matrix is the same as in the conventional
model of Section 28.6.1 except that the row pertaining to an individual with records is
repeated as many times as there are records on that animal. The number of elements in p
is c + d corresponding to these animals with records. Zp would be an identity matrix with
order c + d if the c + d animals with records had made only one record each. Then the
row of this matrix corresponding to an animal is repeated as many times as the number
0
0
can be absorbed easily to
of records in that animal. Since Zp Zp + (Ip2 )1 is diagonal, p
reduce the number of equations to b + d plus the number of elements in . The predicted
real producing ability of the ith animal is a
i + pi , with pi = 0 for animals with no records.
Now we state the gametic model for repeated records. As for single records, a now
has b elements corresponding to the b animals with progeny. Za is exactly the same as
in the gametic model for single records except that the row pertaining to an animal is
repeated as many times as the number of records for that animal. As in the conventional
method for repeated records, p has c + d elements and Zp is the same as in that model.
Now Mendelian sampling is taken care of in this model by altering V ar(p) rather
than V ar(e) as was done in the single record gametic model. For the parents V ar(p)
remains diagonal with the first c diagonals being p2 . The remaining d have the following
possible values.
(1) p2 + .5a2 if both parents are in a,
(2) p2 + .75a2 if one parent is in a,
(3) p2 + a2 if no parent is in a.
Again we can absorb p to obtain a set of equations numbering b plus the number of
elements in , a reduction of c from the conventional equations. The computation of a

for the d animals with no progeny is simple.


) + ki pi .
a
i = .5(sum of parental a
2
2
2
where ki = .5 a /(p + .5a ) for animals with 2 parents in a.
= .75a2 /(p2 + .75a2 ) for those with one parent in a.
= a2 /(p2 + a2 ) for those with no parent in a.

14

These two methods for repeated records are illustrated with the same animals as in
Section 28.8 except now there are repeated records. The 4 animals have 2,3,1,2 records
respectively. These are (5,3,4,2,3,6,7,8). X0 = (1 2 3 1 2 2 3 2). Let
a2 = .25,
p2 = .20,
e2 = .55.
Then the regular mixed model equations are in (28.11).

65.455

5.455 10.909 3.636 9.091 5.455 10.909 3.636 9.091


10.970
2.0
4.0 2.667 3.636
0
0
0

11.455 4.0
0
0
5.455
0
0

9.818
0
0
0
1.818
0

8.970
0
0
0
3.636

8.636
0
0
0

10.455
0
0

6.818
0
8.636

a
=

145.455
14.546
16.364
10.909
27.273
14.546
16.364
10.909
27.273

(11)

and p
are identical. The solution is
Note that the right hand sides for a
(1.9467, .8158, .1972, .5660, 1.0377, .1113, .3632, .4108, .6718).
Next the solution for the gametic
incidence matrix is

1 1

2 1

3 0

1 0

2 0

2 .5

3 .5
2 .5

(12)

model is illustrated with the same data. The


0
0
1
1
1
.5
0
0

1
1
0
0
0
0
0
0
15

0
0
1
1
1
0
0
0

0
0
0
0
0
1
0
0

0
0
0
0
0
0
1
1

corresponding to , a1 , a2 , p1 , p2 , p3 , p4 .
V ar(e) = .55 I,
!
.25 0
V ar(a) =
,
0 .25
V ar(p) = diag (.2, .2, .325, .3875).
Then the mixed model equations are

65.454 11.818 12.727 5.454 10.909 3.636 9.091

9.0
.454 3.636
0
.909 1.818

9.909
0
5.454 .909
0

8.636
0
0
0

10.454
0
0

4.895
0

6.217

145.454

a1
33.636

a
21.818
2

p1 = 14.546

p2
16.364

10.909
p3

27.273
p4

(13)

is diagonal. The solution is


Note that the coefficient submatrix for p
(1.9467, .8158, .1972, .1113, .3632, .6676, 1.3017).
a
Note that ,
1 , a
2 , p1 , p2 are the same as in (28.12). Now
a
3 = .5(.8158 .1972) + .125 (.6676)/.325 = .5660.
a
4 = .5(.8158) + .1875(1.3017)/.3875 = 1.0377.

These are the same results for a


3 and a
3 as (28.12).

16

(14)

Chapter 29
Non-Additive Genetic Merit
C. R. Henderson
1984 - Guelph

Model for Genetic Components

All of the applications in previous chapters have been concerned entirely with additive
genetic models. This may be a suitable approximation, but theory exists that enables
consideration to be given to more complicated genetic models. This theory is simple for
non-inbred populations, for then we can formulate genetic merit of the animals in a sample
as
X
g=
gi .
i

g is the vector of total genetic values for the animals in the sample. gi is a vector
describing values for a specific type of genetic merit. For example, g1 represents additive
values, g2 dominance values, g3 additive additive, g4 additive by dominance, etc. In a
non-inbred, unselected population and ignoring linkage
Cov(gi , gj0 ) = 0
for all pairs of i 6= j.
V ar(additive) = Aa2 ,
V ar(dominance) = Dd2 ,
2
V ar(additive additive) = A#Aaa
,
2
V ar(additive dominance) = A#Dad
,
2
V ar(additive additive dominance) = A#A#Daad
, etc.

The # operation on A and D is described below. These results are due mostly to Cockerham (1954). D is computed as follows. All diagonals are 1. dkm (k 6= m) is computed
from certain elements of A. Let the parents of k and m be g, h and i, j respectively. Then
dkm = .25(agi ahj + agj ahi ).

(1)

In a non-inbred population only one at most of the products in this expression can be
greater than 0. To illustrate suppose k and m are full sibs. Then g = i and h = j.
1

Consequently
dkm = .25[(1)(1) + 0] = .25.
Suppose k and m are double first cousins. Then
dkm = .25[(.5)(.5) + 0] = .0625.
For non-inbred paternal sibs from unrelated dams is
dkm = .25[1(0) + 0(0)] = 0,
and for parent-progeny dkm = 0.
The # operation on two matrices means that the new matrix is formed from the
products of the corresponding elements of the 2 matrices. Thus the ij th element of A#A
is a2ij , and the ij th element of A#D is aij dij . These are called Hadamard products.
Accordingly, we see that all matrices for V ar(gi ) are derived from A.

Single Record on Every Animal

We shall describe BLUP procedures and estimation of variances in this and subsequent
sections of Chapter 29 by a model with additive and dominance components. The extension to more components is straightforward. The model for y with no data missing
is

y = (X I I) a + e .
d
y is n x 1, X is n x p, both I are n x n, and e is n x 1, is p x 1, a and d are n x 1.
V ar(a) = Aa2 ,
V ar(d) = Dd2 ,
V ar(e) = Ie2 .
Cov(a, d0 ), Cov(a, e0 ), and Cov(d, e0 ) are all n n null matrices. Now the mixed model
equations are
o
X0 X
X0
X0
X0 y

1 2
2
= y
I + A e /a
I
X
a
.
1 2
2

X
I
I + D e /d
y
d

(2)

Note that if a, d were regarded as fixed, the last n equations would be identical to the
p + 1, . . . , p + n equations, and we could estimate only differences among elements of

Subtracting the third equation


and d.
a + d. An interesting relationship exists between a
of (29.2) from the second,
= 0.
D1 e2 /d2 d
A1 e2 /a2 a
Therefore
= DA1 2 / 2 a
d
a .
d

(3)

This identity can be used to reduce (29.2) to


X0 X
X0 (I + DA1 d2 /a2 )
X I + A1 e2 /a2 + DA1 d2 /a2

X0 y
y

(4)

in (29.4)
Note that the coefficient matrix of (29.4) is not symmetric. Having solved for a

compute d by (29.3).
a2 , d2 , e2 can be estimated by MIVQUE. Quadratics needed to be computed and
equated to their expectations are
0 D1 d,
and e
0 A1 a
, d
0 e
.
a

(5)

To obtain expectations of the first two of these we need V ar(r), where r is the vector of
right hand sides of (29.2). This is
X0 AX X0 A X0 A
X0 DX X0 D X0 D

2
2
A
A a + DX
D
D
AX
d
AX
A
A
DX
D
D

X0 X X 0 X 0

2
I I
+ X
e .
X
I I

(6)

as follows. Let some g-inverse of the matrix


From (29.6) we can compute V ar(
a), V ar(d)
of (29.2) be

C = Ca .
Cd
Ca and Cd each have n rows. Then
= Ca r,
a
and
= Cd r.
d
V ar(
a) = Ca V ar(r)C0a ,

(7)

= Cd V ar(r)C0 .
V ar(d)
d

(8)

and

) = trA1 V ar(
E(
a0 A1 a
a).
= trD1 V ar(d).

0 D1 d)
E(d

(9)
(10)

0 e
we compute tr(V ar(
For the expectation of e
e)). Note that
X0

= I (X I I) C I
e
y
I
= (I XC11 X0 XC12 C012 X0 XC13 C013 X0
C22 C23 C023 C33 )y
Ty

where

(11)

C11 C12 C13

C
C=
012 C22 C23 .
C013 C023 C33

(12)

V ar(
e) = T V ar(y)T0 .

(13)

V ar(y) = Aa2 + Dd2 + Ie2 .

(14)

Then

REML by the EM type algorithm is quite simple to state. At each round of iteration
V ar(
we need the same quadratics as in (29.5). Now we pretend that V ar(
a), V ar(d),
e)
are represented by the mixed model result with true variance ratios employed. These are
V ar(
a) = Aa2 C22 .
= D 2 C33 .
V ar(d)
d
V ar(
e) = Ie2 WCW0 .
C22 , C33 , C are defined in (29.12).
W = (X I I).

WCW0 can be written as I T = XC11 X0 + XC12 + etc. From these variances


we iterate on
1
+ trA1 C22 )/n,

a2 = (
a0 A a
(15)
1
0D d
+ trD1 C33 )/n,

d2 = (d
(16)
and
+ trWCW0 )/n.

e2 = (
e0 e

(17)

This algorithm guarantees that at each round of iteration all estimates are non-negative
provided the starting values of e2 /a2 , e2 /d2 are positive.
4

Single or No Record on Each Animal

In this section we use the same model as in Section 29.2, except now some animals have
no record but we wish to evaluate them in the mixed model solution. Let us order the
animals by the set of animals with no record followed by the set with records.

y = (X 0 I 0

I)

am
ap
dm
dp

+e.

(18)

The subscript, m, denotes animals with no record, and the subscript, p, denotes animals
with a record. Let there be np animals with a record and nm animals with no record.
Then y is np x 1, X is np x p, the 0 submatrices are both np x nm , and the I submatrices
are both np x np . The OLS equations are

X0 X
0
X
0
X0

0 X0
0 0
0 I
0 0
0 I

o
0 X0


am

0 0

p =
0 I a

0 0

d
m
p
0 I
d

X0 y
0
y
0
y

(19)

The mixed model equations are formed by adding A1 e2 /a2 and D1 e2 /d2 to the appropriate submatrices of matrix (29.19).
We illustrate these equations with a simple example. We have 10 animals with animals
1,3,5,7 not having records. 1,2,3,4 are unrelated, non-inbred animals. The parents of 5
and 6 are 1,2. The parents of 7 and 8 are 3,4. The parents of 9 are 6,7. The parents of
10 are 5,8. This gives

1 0 0 0 .5 .5 0 0

1 0 0 .5 .5 0 0

1 0 0 0 .5 .5

1 0 0 .5 .5

1 .5 0 0
A=

1 0 0

1 .5

.25
.25
.25
.25
.25
.5
.5
.25
1

D = matrix with all 1s in diagonal,


d56 = d65 = d78 = d87 = .25,
5

.25
.25
.25
.25
.5
.25
.25
.5
.25
1

d9,10 = d10,9 = .0625,


and all other elements = 0.
y0 = [6, 9, 6, 7, 4, 6].
X0 = (1 1 1 1 1 1).
Assuming that e2 /a2 = 2.25 and e2 /d2 = 5, the mixed model coefficient matrix with
animals ordered as in the A and D matrices is in (29.20) . . . (29.22). The right hand side
vector is [38, 0, 6, 0, 9, 0, 6, 0, 7, 4, 6, 0, 6, 0, 9, 0, 6, 0, 7, 4, 6]0 . The solution is
o = 6.400,
0 = [.203, .256, .141, .600, .259, .403, .056, .262, .521, .058],
a
and
0 = (0, .024, 0, .333, 0, 0, .014, .056, .316, .073).
d
Upper left 11 11

6.

0
1.
4.5 2.25
5.5

0
1.
0
1.
0
1.
1.
1.
0
0 2.25 2.25
0
0
0
0
0
0 2.25 2.25
0
0
0
0
4.5 2.25
0
0 2.25 2.25
0
0
5.5
0
0 2.25 2.25
0
0
5.625
0
0 1.125
0 2.25
6.625 1.125
0 2.25
0
5.625
0 2.25
0
6.625
0 2.25
5.5
0
5.5

(20)

Upper right 10 10 and (lower left 10 11)0

0
0
0
0
0
0
0
0
0
0
0

1
0
1
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0

1
0
0
0
1
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0

1
0
0
0
0
0
1
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0

1
0
0
0
0
0
0
0
1
0
0

1
0
0
0
0
0
0
0
0
1
0

1
0
0
0
0
0
0
0
0
0
1

(21)

Lower right 10 10

5.0

0
6.0

0
0
5.0

0
0
0
6.0

0
0
0
0
0
0
0
0
5.333 1.333
6.333

0
0
0
0
0
0
0
0
0
0
0
0
5.333 1.333
6.333

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6.02 .314
6.02

(22)

If we wish EM type estimation of variances we iterate on


p )/[n rank (X)],
p y0 d

e2 = (y0 y y0 X o y0 a
2
0 1
2
+ tr

a = (
aA a
e Caa )/n,

and
0 D1 d
+ tr

d2 = (d
e2 Cdd )/n,
for
0 = (
0p ),
a
a0m a
0 = (d
0 d
0 ),
d
m
p
and n = number of animals. A g-inverse of (29.19) is

Cxx Cxa Cxd


0

Cxa Caa Cad .


C0xd C0ad Cdd
Remember that in these computations V ar(e) = Ie2 and the equations are set up with
scaling, V ar(e) = I, V ar(a) = Aa2 /e2 , V ar(d) = Dd2 /e2 .

A Reduced Set of Equations

When there are several genetic components in the model, a much more efficient computing
strategy can be employed than that of Section 29.3. Let m be total genetic value of the
members of a population, and this is
m=

X
i

gi ,

where gi is the merit for a particular type of genetic component, additive for example.
Then in a non-inbred population and ignoring linkage
V ar(m) =

V ar(gi )

since
Cov(gi , gj0 ) = 0
for all i 6= j. Then a model is
y = X + Zm m + e.

(23)

We could, if we choose, add a term for other random components. Now mixed model
equations for BLUE and BLUP are
X0 R1 X
X0 R1 Zm
Z0m R1 X Z0m R1 Zm + [V ar(m)]1

X0 R1 y
Z0m R1 y

(24)

If we are interested in BLUP of certain genetic components this is simply


i = V ar(gi )[V ar(m)]1 m.

(25)

This method is illustrated by the example of Section 29.2. Except for scaling
V ar(e) = I,
V ar(a) = 2.251 A,
V ar(d) = 51 D.
Then
V ar(m) = 2.251 A + 51 D

.6444

0
.6444

0
0
.6444

0 .2222 .2222
0
0
0 .2222 .2222
0
0
0
0
0 .2222 .2222
.6444
0
0 .2222 .2222
.6444 .2722
0
0
.6444
0
0
.6444 .2722
.6444

.1111
.1111
.1111
.1111
.1111
.2222
.2222
.1111
.6444

.1111
.1111
.1111
.1111
.2222
.1111
.1111
.2222
.1236
.6444

(26)

Adding the inverse of this to the lower 10 10 block of the OLS equations of (29.27) we
obtain the mixed model equations. The OLS equations including animals with missing

records are

6 0 1 0

0 0 0

1 0

1
0
0
0
1

0
0
0
0
0
0

1
0
0
0
0
0
1

0
0
0
0
0
0
0
0

1
0
0
0
0
0
0
0
1

1
0
0
0
0
0
0
0
0
1

1
0
0
0
0
0
0
0
0
0
1

38
0
6
0
9
0
6
0
7
4
6

(27)

The resulting solution is

= 6.400 as before, and


= [.203, .280, .141, .933, .259, .402, .070, .319, .837, .131]0 .
m
as
and using the method of (29.25) we obtain the same solution to a
and d
From m
before.
To obtain REML estimates identical to those of Section 29.4 compute the same
quantities except
e2 can be computed by

e2 = (y0 y y0 X o y0 Zm m)/[n
rank (X)].
are computed from m
and d
as described in this section. With the scaling done
Then a
G = Aa2 /e2 + Dd2 /e2 ,
Caa = (a2 /e2 )A (a2 /e2 )AG1 (G Cmm )G1 A(a2 /e2 ),
Cdd = (d2 /e2 )D (d2 /e2 )DG1 (G Cmm )G1 D(d2 /e2 ),
where a g-inverse of the reduced coefficient matrix is
Cxx Cxm
C0xm Cmm

In our example Caa for both the extended and the reduced equations is
.4179 .0001 .0042
.0224

.3651 .0224
.0571

.4179 .0001

.3651

.2057
.1847
.0112
.0428
.4100

.1856
.1802
.0196
.0590
.1862
.3653

.0112
.0428
.2057
.1847
.0287
.0365
.4100

.0196
.0590
.1856
.1802
.0365
.0618
.1862
.3653

.0942
.1176
.1062
.1264
.1108
.1953
.2084
.1305
.3859

.1062
.1264
.0942
.1176
.2084
.1305
.1108
.1953
.1304
.3859

Similarly Cdd is

.2

0
.1786

0
0
0
0
0 .0034 .0016 .0064
.2
0
0
0
.1786 .0008 .0030
.1986 .0444
.1778

0
0
0
0
0
0
.2

0
.0030
0
.0064
.0007
.0027
0
.1778

0
.0045
0
.0047
.0016
.0062
0
.0047
.1778

0
.0047
0
.0045
.0011
.0045
0
.0062
.0136
.1778

Caa and Cdd have rather large rounding errors.

Multiple or No Records

Next consider a model with repeated records and the traditional repeatability model.
That is, all records have the same variance and all pairs of records on the same animal
have the same covariance. Ordering the animals with no records first the model is
y = [X 0 Z 0 Z Z]( : am : ap : dm : dp t)0 + e.

(28)

y is n x 1, X is n x p, the null matrices are n x nm , Z is n x np . n is the number of records,


nm the number of animals with no record, and np the number of animals with 1 or more
records. am , ap refer to a for animals with no records and with records respectively, and
similarly for dm and dp . t refers to permanent environmental effects for animals with
records.
V ar(a) = Aa2 ,
V ar(d) = Dd2 ,
V ar(t) = It2 ,
V ar(e) = Ie2 .
These 4 vectors are uncorrelated. The OLS equations are

X0 X
0
Z0 X
0
Z0 X
Z0 X

0
0
0
0
0
0

X0 Z
0
Z0 Z
0
Z0 Z
Z0 Z

0
0
0
0
0
0

X0 Z
0
Z0 Z
0
Z0 Z
Z0 Z

X0 Z
0
Z0 Z
0
Z0 Z
Z0 Z
10

o
m
a
p
a
m
d
p
d
t

X0 y
0
Z0 y
0
Z0 y
Z0 y

(29)

The mixed model equations are formed by adding A1 e2 /a2 , D1 e2 /d2 , and Ie2 /t2 to
appropriate blocks in (29.29).
We illustrate with the same 10 animals as in the preceding section, but now there are
multiple records as follows.
Records
Animals 1 2 3
1
X X X
2
6 5 4
3
X X X
4
9 8 X
5
X X X
6
6 5 6
7
X X X
8
7 3 X
9
4 5 X
10
6 X X
X denotes no record. We assume that the first records have a common mean 1 , the
second a common mean 2 , and the third a common mean 3 . It is assumed that e2 /a2
= 1.8, e2 /d2 = 4, = e2 /t2 = 4. Then the mixed model coefficient matrix is in (29.30)
. . . (29.32). The right hand side vector is (38, 26, 10, 0, 15, 0, 17, 0, 17, 0, 10, 9, 6, 0, 15,
0, 17, 0, 17, 0, 10, 9, 6, 15, 17, 17, 10, 9, 6). The solution is
0

o = (6.398, 5.226, 5.287),


0 = (.067, .295, .364, .726, .201, .228, .048, .051,
a
.355, .166),
0
= (0, .103, 0, .491, .019, .077, .048, .190, .241,
d
.051),
0
t = (.103, .491, .077, .190, .239, .036).
t refers to the six animals with records. BLUP of the others is 0.

11

Upper left 13 13

6.0

0
5.0

0
0
2.0

0
0
0
3.6

1.0
1.0
1.0
1.8
6.6

0 1.0
0
1.0
0
1.0
1.0
1.0
0 1.0
0
1.0
0
1.0
1.0
0

0
0
0
1.0
0
0
0
0

0
0 1.8 1.8
0
0
0
0

0
0 1.8 1.8
0
0
0
0

3.6 1.8
0
0 1.8 1.8
0
0

5.6
0
0 1.8 1.8
0
0

4.5
0
0
.9
0 1.8

7.5
.9
0 1.8
0

4.5
0 1.8
0

6.5
0 1.8

5.6
0
4.6

(30)

Upper right 13 16 and (lower left 16 13)

0
0
0
0
0
0
0
0
0
0
0
0
0

1
1
1
0
3
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0
0

1
1
0
0
0
0
2
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0
0

1
1
1
0
0
0
0
0
3
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0
0
0

1
1
0
0
0
0
0
0
0
0
2
0
0

1
1
0
0
0
0
0
0
0
0
0
2
0

12

1
0
0
0
0
0
0
0
0
0
0
0
1

1
1
1
0
3
0
0
0
0
0
0
0
0

1
1
0
0
0
0
2
0
0
0
0
0
0

1
1
1
0
0
0
0
0
3
0
0
0
0

1
1
0
0
0
0
0
0
0
0
2
0
0

1
1
0
0
0
0
0
0
0
0
0
2
0

1
0
0
0
0
0
0
0
0
0
0
0
1

(31)

Lower right 16 16

4 0 0 0

7 0 0

4 0

0
0
0
0
0
0
0
0
4.267 1.067
7.267

0
0
0
0
0
0
0
0
0
0
0
0
4.267 1.067
6.267

0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6.016 .251
5.016

0
3
0
0
0
0
0
0
0
0
7

0
0
0
2
0
0
0
0
0
0
0
6

0
0
0
0
0
0
0
0
0
0
0
0
7

0
0
0
0
0
0
0
2
0
0
0
0
0
6

0
0
0
0
0
0
0
0
2
0
0
0
0
0
6

0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
5

(32)

A Reduced Set of Equations for Multiple Records

As in Section 29.4 we can reduce the equations by now letting


m=

gi + t,

where gi have the same meaning as before, and t is permanent environmental effect with
one
V ar(t) = It2 . Then the mixed model equations are like those of (29.24) and from m

i and t.
can compute g
Using the same example as in Section 29.5 the OLS equations are

6 0 0 0

5 0 0

2 0

1
1
1
0
3

0
0
0
0
0
0

1
1
0
0
0
0
2

0
0
0
0
0
0
0
0

1
1
1
0
0
0
0
0
3

0
0
0
0
0
0
0
0
0
0

13

1
1
0
0
0
0
0
0
0
0
2

1
1
0
0
0
0
0
0
0
0
0
2

1
0
0
0
0
0
0
0
0
0
0
0
1

38
26
10
0
15
0
17
0
17
0
10
9
6

Now with scaling V ar(e) = I.


V ar(m) = 1.81 A + .25D + .25I

1
576

608

0
608

0
0
608

0 160 160
0
0 80 80

0 160 160
0
0 .80 80

0
0
0 160 160 80 80

608
0
0 160 160 80 80

608 196
0
0 80 160
.
608
0
0 160 80

608 196 160 80

608 80 160

608 89

608

Adding the inverse of this to the lower 10 10 block of the OLS equations we obtain the
mixed model equations. The solution is
(
1 ,
2 ,
3 ) = (6.398, 5.226, 5.287),
the same as before, and
= (.067, .500, .364, 1.707, .182, .073, .001,
m
.431, .835, .253)0 .
Then
= V ar(a)[V ar(m)]1 m
= same as before.
a
= V ar(d)[V ar(m)]1 m
= same as before.
d
t = V ar(t)[V ar(m)]1 m
= same as before
recognizing that ti for an animal with no record is 0.
To compute EM type REML iterate on
e2 = [y0 y (soln. vector)0 rhs]/[n rank(X)].
Compute Caa , Cdd , Ctt as in Section 29.4. Now, however, Ctt will have dimension, 10,
rather than 6 in order that the matrix of the quadratic in t at each round of iteration will
be I. If we did not include missing ti , a new matrix would need to be computed at each
round of iteration.

14

Chapter 30
Line Cross and Breed Cross Analyses
C. R. Henderson
1984 - Guelph

This chapter is concerned with a genetic model for line crosses, BLUP of crosses, and
estimation of variances. It is assumed that a set of unselected inbred lines is derived from
some base population. Therefore the lines are assumed to be uncorrelated.

Genetic Model

We make the assumption that the total genetic variance of a population can be partitioned into additive + dominance + (additive additive) + (additive dominance),
etc. Further, in a non-inbred population these different sets of effects are mutually uncorrelated, e.g., Cov (additive, dominance) = 0. The covariance among sets of effects can
be computed from the A matrix. Methods for computing A are well known. D can be
computed as described in Chapter 29.
V ar(additive effects) = Aa2 .
V ar(dominance effects) = Dd2 .
2
.
V ar(additive dominance) = A#Dad
2
, etc.
V ar(additive additive dominance) = A#A#Daad

# denotes the operation of taking the product of corresponding elements of 2 matrices.


Thus the ij th element of A#D is aij dij .

Covariances Between Crosses

If lines are unrelated, the progeny resulting from line crosses are non-inbred and consequently the covariance matrices for the different genetic components can be computed for
the progeny. Then one can calculate BLUP for these individual animals by the method
described in Chapter 29. With animals as contrasted to plants it would seem wise to
include a maternal influence of line of dam in the model as described below. Now in order
to reduce computational labor we shall make some simplifying assumptions as follows.
1

1. All members of all lines have inbreeding coefficient = f .


2. The lines are large enough that two random individuals from the same line are unrelated except for the fact that they are members of the same line.
Consequently, the A matrix for members of the same line is

1+f

2f
..

2f

1+f

From this result we can calculate the covariance between any random pair of individuals
from the same cross or a random individual of one cross with a random individual of
another cross. We illustrate first with single crosses. Consider line cross, 1 2, line 1
being used as the sire line. Two random progeny pedigrees can be visualized as
1a

1b
&

&
pa

pb

2a

2b

Therefore
a1a,1b = a2a,2b = 2f.
apa,pb = .25(2f + 2f ) = f.
dpa,pb = .25[2f (2f ) + 0(0)] = f 2 .
Then the genetic covariance between 2 random members of any single cross is equal to
the genetic variance of single cross means
2
2
+ f 3 ad
+ etc.
= f a2 + f 2 d2 + f 2 aa

Note that if f = 1, this simplifies to the total genetic variance of individuals in the
population from which the lines were derived.
Next consider the covariance between crosses with one common parental line, say 1
2 with 1 3.
1a

1b
&

&
pa

pb

%
2a

%
3b

As before, a1a,1b = 2f, but all other relationships among parental pairs are zero. Then
apa,pb = .25(2f ) = .5f.
dpa,pb = 0.
2
+ ..., etc.
Covariance = .5f a2 + .25f 2 aa
Next we consider 3 way crosses. Represent 2 random members of a 3 way cross (1
2) 3 by
1a

1b
&

&
xa

xb
&

2a

%
pa

&

2b

pb

3a

3b

Non-zero additive relationships are


(1a, 1b) = (2a, 2b) = (3a, 3b) = 2f, and
(xa, xb) = f,
(pa, pb) = .25(f + 2f ) = .75f,
and the dominance relationship is
(pa, pb) = .25[f (2f ) + 0(0)] = .5f 2 .
Thus the genetic variance of 3 way crosses is
3
1
3
9 2 2
2
f a2 + f 2 d2 + f 3 ad
f aa + . . . etc.
+
4
2
8
16
The covariance between a single cross and a 3 way cross depends upon the way the
crosses are made.
For a (1 2) 3 with 1 2 is f /2, and d is 0.
For a (1 2) 3 with 1 3 is .75f , and d is .5f 2 .
The variance of 4 way crosses is .5 f a2 + .25 f 2 d2 + . . . etc. The variance of top
crosses with an inbred line as a parent is .5 f a2 + (0)d2 + etc.
If we know the magnitude of the various components of genetic variance, we can
derive the variance of any line cross or the covariance between any pair of line crosses.
Then these can be used to set up mixed model equations. One must be alert to the
possibility that some of the variance-covariance matrices of genetic components may be
singular.
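All of the expressions above have the common form a σ_a^2 + d σ_d^2 + a^2 σ_aa^2 + a d σ_ad^2 + ..., where a and d are the additive and dominance relationships between the two cross progeny. A small sketch of that bookkeeping follows; the variance-component values are assumed for illustration only.

    # Genetic covariance between two cross progeny with additive relationship a and
    # dominance relationship d (a sketch, not the author's program).
    def genetic_cov(a, d, s2a, s2d, s2aa=0.0, s2ad=0.0):
        return a * s2a + d * s2d + a**2 * s2aa + a * d * s2ad

    f = 0.6
    s2a, s2d, s2aa, s2ad = 0.4, 0.3, 0.1, 0.05   # assumed components

    # two members of the same single cross: a = f, d = f^2
    print(genetic_cov(f, f**2, s2a, s2d, s2aa, s2ad))
    # crosses 1x2 and 1x3 (one common line): a = .5f, d = 0
    print(genetic_cov(0.5 * f, 0.0, s2a, s2d, s2aa, s2ad))
    # two members of the same three-way cross (1x2)x3: a = .75f, d = .5f^2
    print(genetic_cov(0.75 * f, 0.5 * f**2, s2a, s2d, s2aa, s2ad))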
Reciprocal Crosses Assumed Equal

This section is concerned with a model in which the cross, line i × line j, is assumed the same as the cross, line j × line i. The model is

  y_ijk = x'_ijk β + c_ij + e_ijk.


Var(c) has this form:

  Var(c_ij) = f σ_a^2 + f^2 σ_d^2 + f^3 σ_ad^2 + etc.
            = Cov(c_ij, c_ji).
  Cov(c_ij, c_ij') = Cov(c_ij, c_ji') = .5f σ_a^2 + .25f^2 σ_aa^2 + ... etc.
  Cov(c_ij, c_i'j') = 0.

We illustrate BLUP with single crosses among 4 lines with f = .6, σ_a^2 = .4, σ_d^2 = .3, σ_e^2 = 1. All other genetic covariances are ignored, and the fixed part of the model is simply μ. The numbers of observations per cross, n_ij, and the cross means, ȳ_ij, are

          n_ij                  ȳ_ij
        1  2  3  4            1  2  3  4
   1    X  5  3  2        1   X  6  4  7
   2    4  X  6  3        2   5  X  3  8
   3    4  2  X  5        3   6  7  X  3
   4    2  3  9  X        4   5  6  4  X

(rows denote the sire line, columns the dam line.)

X denotes no observation. The OLS equations are in (30.1). Note that a_ij is combined with a_ji to form a single variable a_ij, and similarly for d.

  48  9  7  4  8  6 14  9  7  4  8  6 14     μ       235
       9  0  0  0  0  0  9  0  0  0  0  0    a12       50
          7  0  0  0  0  0  7  0  0  0  0    a13       36
             4  0  0  0  0  0  4  0  0  0    a14       24
                8  0  0  0  0  0  8  0  0    a23       32
                   6  0  0  0  0  0  6  0    a24   =   42
                     14  0  0  0  0  0 14    a34       51
                         9  0  0  0  0  0    d12       50
                            7  0  0  0  0    d13       36
                               4  0  0  0    d14       24
                                  8  0  0    d23       32
                                     6  0    d24       42
                                       14    d34       51        (1)

(only the upper half of the symmetric coefficient matrix is shown.)

           [.24  .12  .12  .12  .12   0 ]
           [.12  .24  .12  .12   0   .12]
  Var(a) = [.12  .12  .24   0   .12  .12] ,   Var(d) = .108 I.
           [.12  .12   0   .24  .12  .12]
           [.12   0   .12  .12  .24  .12]
           [ 0   .12  .12  .12  .12  .24]
Var(a) is singular. Consequently we pre-multiply equation (30.1) by

  [1    0      0]
  [0  Var(a)   0]
  [0    0      I]

and add

  [0   0        0      ]
  [0   I        0      ]
  [0   0   [Var(d)]^-1 ]

to the resulting coefficient matrix. The solution to these equations is

  μ̂  = 5.1100,
  â' = (.5528, .4229, .4702, .4702, .4229, .5528),
  d̂' = (.0528, .1962, .1266, .2965, .5769, .5504).
Note that Σ d̂_ij = 0. Now the predicted future progeny average of the ijth and jith cross is the fixed part of the model for future progeny plus â_ij + d̂_ij.


If we want to predict the future progeny mean of a cross between i × k or between k × i, where k is not in the sample, we can do this by selection index methods using â, d̂ as the data, with variances and covariances applying to a + d rather than a. See Section 5.9. For example, the prediction of the 1 × 5 cross is

                         [.348  .12   .12   .12   .12    0  ]^-1
                         [.12   .348  .12   .12    0    .12 ]
  (.12 .12 .12 0 0 0)    [.12   .12   .348   0    .12   .12 ]      (â + d̂).       (2)
                         [.12   .12    0    .348  .12   .12 ]
                         [.12    0    .12   .12   .348  .12 ]
                         [ 0    .12   .12   .12   .12   .348]

If we were interested only in prediction of crosses among the lines 1, 2, 3, 4, we could reduce the mixed model equations to solve for â + d̂ jointly. Then there would be only 7 equations. The 6 × 6 matrix appearing in (30.2) is Var(a + d); its inverse is the G^-1 to add to the lower 6 × 6 submatrix of the least squares coefficient matrix.
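A sketch of the computation just described follows (assumed code, building (30.1) from the n_ij and ȳ_ij table, premultiplying by diag(1, Var(a), I) and adding diag(0, I, Var(d)^-1)); it has not been checked against the solution quoted above.

    import numpy as np

    n    = np.array([9., 7., 4., 8., 6., 14.])           # n_ij + n_ji for the 6 crosses
    ybar = np.array([50., 36., 24., 32., 42., 51.]) / n  # combined cross means

    # OLS equations (30.1): unknowns mu, a12..a34, d12..d34
    C = np.zeros((13, 13)); r = np.zeros(13)
    C[0, 0] = n.sum()
    C[0, 1:7] = C[0, 7:13] = n
    C[1:7, 0] = C[7:13, 0] = n
    C[1:7, 1:7] = C[1:7, 7:13] = C[7:13, 1:7] = C[7:13, 7:13] = np.diag(n)
    r[0] = (n * ybar).sum()
    r[1:7] = r[7:13] = n * ybar

    f, s2a, s2d = 0.6, 0.4, 0.3
    pairs = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    # Var(a): f*s2a on the diagonal, .5*f*s2a for crosses sharing a line, 0 otherwise
    Va = np.array([[f * s2a if p == q else
                    0.5 * f * s2a if set(p) & set(q) else 0.0
                    for q in pairs] for p in pairs])
    Vd = f**2 * s2d * np.eye(6)                           # .108 I

    # singular-G device: premultiply by diag(1, Va, I), add diag(0, I, Vd^-1)
    T = np.block([[np.eye(1), np.zeros((1, 12))],
                  [np.zeros((6, 1)), Va, np.zeros((6, 6))],
                  [np.zeros((6, 7)), np.eye(6)]])
    M = T @ C + np.block([[np.zeros((1, 13))],
                          [np.zeros((6, 1)), np.eye(6), np.zeros((6, 6))],
                          [np.zeros((6, 7)), np.linalg.inv(Vd)]])
    sol = np.linalg.solve(M, T @ r)
    print(sol)        # mu-hat, a-hat (6 elements), d-hat (6 elements)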
Reciprocal Crosses With Maternal Effects

In most animal breeding models one would assume that because of maternal effects the ijth cross would be different from the jith. Now the genetic model for maternal effects involves the genetic merit with respect to maternal ability of the female line in a single cross. This complicates statements of the variances and covariances contributed by different genetic components since the lines are inbred. The statement of the σ_a^2 contribution is possible but not the others. The contribution of σ_a^2 is

  Covariance between 2 progeny of the same cross = 2f σ_a^2,
  Covariance between progeny of i × j with k × j  = .5f σ_a^2,

where the second subscript denotes the female line. Consequently, if we ignore other components, we need only to add m_j to the model with Var(m) = I σ_m^2. We illustrate with the same data as in Section 30.3 with Var(m) = .5 I. The OLS equations now are in (30.3). Now we pre-multiply these equations by

  [1    0      0   0]
  [0  Var(a)   0   0]
  [0    0      I   0]
  [0    0      0   I]

Then add to the resulting coefficient matrix

  [0   0        0            0       ]
  [0   I        0            0       ]
  [0   0   [Var(d)]^-1       0       ]
  [0   0        0      [Var(m)]^-1   ]

The resulting solution is

  μ̂  = 5.1999,
  â' = (.2988, .2413, .3217, .3217, .2413, .2988),
  d̂' = (.1737, .2307, .1136, .1759, .4479, .4426),
  m̂' = (.0560, .6920, .8954, .1475).

The coefficient matrix of (30.3), with unknowns ordered (μ, a12, a13, a14, a23, a24, a34, d12, d13, d14, d23, d24, d34, m1, m2, m3, m4) and only the upper half of the symmetric matrix shown, is

  48  9  7  4  8  6 14  9  7  4  8  6 14 10 10 18 10
       9  0  0  0  0  0  9  0  0  0  0  0  4  5  0  0
          7  0  0  0  0  0  7  0  0  0  0  4  0  3  0
             4  0  0  0  0  0  4  0  0  0  2  0  0  2
                8  0  0  0  0  0  8  0  0  0  2  6  0
                   6  0  0  0  0  0  6  0  0  3  0  3
                     14  0  0  0  0  0 14  0  0  9  5
                         9  0  0  0  0  0  4  5  0  0
                            7  0  0  0  0  4  0  3  0
                               4  0  0  0  2  0  0  2
                                  8  0  0  0  2  6  0
                                     6  0  0  3  0  3
                                       14  0  0  9  5
                                           10  0  0  0
                                              10  0  0
                                                 18  0
                                                    10        (3)

and the right hand side vector is

  (235, 50, 36, 24, 32, 42, 51, 50, 36, 24, 32, 42, 51, 54, 62, 66, 53)'.

Single Crosses As The Maternal Parent

If single crosses are used as the maternal parent in crossing, we can utilize various components of genetic variation with respect to maternal effects, for then the maternal parents
are non-inbred.

Breed Crosses

If one set of breeds is used as males and a second different set is used as females in a breed cross, the problem is the same as for any two way fixed cross-classified design with interaction and possible missing subclasses. If there is no missing subclass, the weighted squares of means analysis would seem appropriate, but with small numbers of progeny per cross, ȳ_ij may not be the optimum criterion for choosing the best cross. Rather, we might choose to treat the interaction vector as a pseudo-random variable and proceed to a biased estimation that might well have smaller mean squared error than the ȳ_ij. If subclasses are missing, this biased procedure enables finding a biased estimator of such crosses.

Same Breeds Used As Sires And Dams

If the same breeds are used as sires and as dams, and with progeny of some or all of the pure breeds included in the design, the analysis can be more complicated. Again one possibility is to evaluate a cross or pure line simply by the subclass mean. However, most breeders have attempted a more complicated analysis involving, for example, the following model for μ_ij, the true mean of the cross between the ith sire breed and the jth dam breed:

  μ_ij = μ + s_i + d_j + γ_ij + p   if i = j,
       = μ + s_i + d_j + γ_ij        if i ≠ j.
From the standpoint of ranking crosses by BLUE, this model is of no particular value, for even with filled subclasses the rank of the coefficient matrix is only b^2, where b is the number of breeds. A solution to the OLS equations is

  μ° = s° = d° = p° = 0,
  γ°_ij = ȳ_ij.

Thus BLUE of a breed cross is simply ȳ_ij, provided n_ij > 0. The extended model provides no estimate of a missing cross since that is not estimable. In contrast, if one is prepared to use biased estimation, a variety of estimates of missing crosses can be derived, and these same biased estimators may, in fact, be better estimators of filled subclasses than ȳ_ij. Let us restrict ourselves to estimators of μ_ij that have expectation μ + s_i + d_j + p + a linear function of γ if i = j, or μ + s_i + d_j + a linear function of γ if i ≠ j. Assume that the γ_ii are different from the γ_ij (i ≠ j). Accordingly, let us assume for convenience that
  Σ_{j=1}^{b} γ_ij = 0   for i = 1, ..., b,
  Σ_{i=1}^{b} γ_ij = 0   for j = 1, ..., b, and
  Σ_{i=1}^{b} γ_ii = 0.

Next permute all labelling of breeds and compute the average squares and products of the γ_ij. These have the following form (i, j, k, l denote distinct subscripts):

  Av.(γ_ii)^2 = d.
  Av.(γ_ij)^2 = c.
  Av.(γ_ii γ_jj) = -d/(b-1).
  Av.(γ_ii γ_ij) = -d/(b-1).
  Av.(γ_ii γ_ji) = -d/(b-1).
  Av.(γ_ij γ_ji) = r.
  Av.(γ_ii γ_jk) = 2d/[(b-1)(b-2)].
  Av.(γ_ij γ_ik) = Av.(γ_ij γ_kj) = [d - c(b-1)]/[(b-1)(b-2)].
  Av.(γ_ij γ_ki) = [d - r(b-1)]/[(b-1)(b-2)].
  Av.(γ_ij γ_jk) = Av.(γ_ij γ_ki).
  Av.(γ_ij γ_kl) = [(c + r)(b-1) - 4d]/[(b-1)(b-2)(b-3)].
These squares and products comprise a singular P matrix which could then be used in pseudo mixed model equations. This would, of course, require estimating d, c, r from the data. Solving the resulting mixed model type equations,

  μ̂_ii = μ° + s°_i + d°_i + γ̂_ii + p°,
  μ̂_ij = μ° + s°_i + d°_j + γ̂_ij,   when i ≠ j.
A simpler method is to pretend that the model for μ_ij is

  μ_ij = μ + s_i + d_j + γ_ij + r_(i,j)   when i ≠ j, and
  μ_ii = μ + s_i + d_i + γ_ii + p.

r has b(b-1)/2 elements and (i,j) denotes i < j. Thus the element of r for ij is the same as for ji. Then partition γ into the γ_ii elements and the γ_ij elements and pretend that γ and r are random variables with

  Var(γ_11, γ_22, ...)' = I σ_1^2,   Var(γ_12, γ_13, ...)' = I σ_2^2,   Var(r) = I σ_3^2.
The covariances between these three vectors are all null. Then set up and solve the mixed model equations. With proper choices of values of σ_1^2, σ_2^2, σ_3^2 relative to b, d, c, r, the estimates of the breed crosses are identical to those from the previous method using singular P. This simpler method is easier to compute, and it is also much easier to estimate σ_1^2, σ_2^2, σ_3^2 than the parameters of P. For example, we could use Method 3 by computing appropriate reductions and equating them to their expectations.
We illustrate these two methods with a 4 breed cross. The n_ij and y_ij. were as follows.

        n_ij                 y_ij.
      1  2  3  4           1  2  3  4
  1   5  2  3  1       1   6  3  2  7
  2   4  2  6  7       2   8  3  5  6
  3   3  5  2  8       3   9  4  7  3
  4   4  2  3  4       4   2  6  8  6

(rows denote the sire breed, columns the dam breed.)

Assume that P is the matrix displayed in (30.4) . . . (30.6), and Var(e) = I. Then we premultiply the OLS equations by

  [I  0]
  [0  P]

and add I to the last 16 diagonal coefficients.
Upper left 8 × 8 (rows and columns ordered γ_11, γ_12, γ_13, γ_14, γ_21, γ_22, γ_23, γ_24; upper half of the symmetric matrix shown)

  1.8   -.6    -.6    -.6    -.6    -.6    .6     .6
        4.48  -1.94  -1.94   .88   -.6   -.14   -.14
              4.48  -1.94  -.14    .6  -1.94   1.48
                     4.48  -.14    .6   1.48  -1.94
                            4.48  -.6  -1.94  -1.94
                                  1.8    -.6    -.6
                                        4.48  -1.94
                                               4.48          (4)

Upper right 8 × 8 and (lower left 8 × 8)' (columns γ_31, γ_32, γ_33, γ_34, γ_41, γ_42, γ_43, γ_44)

   -.6    .6    -.6    .6    -.6    .6    .6   -.6
   -.14 -1.94    .6   1.48  -.14 -1.94  1.48    .6
    .88  -.14   -.6   -.14  -.14  1.48 -1.94    .6
   -.14  1.48    .6  -1.94   .88  -.14  -.14   -.6
  -1.94  -.14    .6   1.48 -1.94  -.14  1.48    .6
    .6   -.6    -.6    .6    .6   -.6    .6    -.6
   -.14   .88   -.6   -.14  1.48  -.14 -1.94    .6
   1.48  -.14    .6  -1.94  -.14   .88  -.14   -.6           (5)

Lower right 8 × 8

  4.48  -1.94   -.6   -1.94  -1.94   1.48   -.14    .6
         4.48   -.6   -1.94   1.48  -1.94   -.14    .6
                1.8    -.6     .6     .6    -.6    -.6
                       4.48   -.14   -.14    .88   -.6
                              4.48  -1.94  -1.94   -.6
                                     4.48  -1.94   -.6
                                            4.48   -.6
                                                   1.8        (6)

A solution to these equations is

  μ° = 0,
  s° = (2.923, 1.713, 2.311, 2.329)',
  d° = (-.652, -.636, -.423, 0)',
  p° = .007.

γ̂ in tabular form is

  -1.035   -.754  -1.749   3.538
    .898    .377   -.434   -.841
   1.286   -.836   1.453  -1.902
  -1.149   1.214    .729   -.795

The resulting μ̂_ij are

  1.243  1.533   .752  6.462
  1.959  1.461   .857   .872
  2.945   .839  3.349   .409
   .528  2.908  2.635  1.541

Note that these μ̂_ij ≠ ȳ_ij but are not markedly different from them. The same μ̂_ij can be obtained by using

  Var(γ_ii) = 2.88 I,   Var(γ_ij) = 7.2 I,   Var(r) = 2.64 I.

The solution to these mixed model equations is different from before, but the resulting μ̂_ij are identical. Ordinarily one would not accept a negative variance. The reason for this in our example was a bad choice of the parameters of P. The OLS coefficient matrix for this solution is in (30.7) . . . (30.9). The right hand sides are (18, 22, 23, 22, 25, 16, 22, 22, 6, 3, 2, 7, 8, 3, 5, 6, 9, 4, 7, 3, 2, 6, 8, 6, 11, 11, 9, 9, 12, 11). μ° and d°_4 are deleted, giving a solution of 0 for them. The OLS equations for the preceding method are the same as these except the last 6 equations and unknowns are deleted. The solution is

  μ° = 0,
  s° = (1.460, 1.379, 2.838, 1.058)',
  d° = (-.844, .301, 1.375, 0)',
  p° = .007,
  r° = (.253, -.239, 1.125, -.888, .220, -.471)'.

γ° in tabular form is

    .621   -.481  -1.844   3.877
   1.172   -.226  -1.009   -.727
   1.191  -1.412   -.872  -1.958
   -.810   1.328    .673    .477

This solution gives the same result for μ̂_ij as before.
The unknowns are ordered s_1, s_2, s_3, s_4, d_1, d_2, d_3, p, γ_11, γ_12, γ_13, γ_14, γ_21, γ_22, γ_23 (first 15) and γ_24, γ_31, γ_32, γ_33, γ_34, γ_41, γ_42, γ_43, γ_44, r_12, r_13, r_14, r_23, r_24, r_34 (last 15).

Upper left 15 × 15 (lower half of the symmetric matrix shown)

  11
   0 19
   0  0 18
   0  0  0 13
   5  4  3  4 16
   2  2  5  2  0 11
   3  6  2  3  0  0 14
   5  2  2  4  5  2  2 13
   5  0  0  0  5  0  0  5  5
   2  0  0  0  0  2  0  0  0  2
   3  0  0  0  0  0  3  0  0  0  3
   1  0  0  0  0  0  0  0  0  0  0  1
   0  4  0  0  4  0  0  0  0  0  0  0  4
   0  2  0  0  0  2  0  2  0  0  0  0  0  2
   0  6  0  0  0  0  6  0  0  0  0  0  0  0  6          (7)

Upper right 15 × 15 and (lower left 15 × 15)' (rows as above, columns γ_24, ..., r_34)

  0 0 0 0 0 0 0 0 0  2 3 1 0 0 0
  7 0 0 0 0 0 0 0 0  4 0 0 6 7 0
  0 3 5 2 8 0 0 0 0  0 3 0 5 0 8
  0 0 0 0 0 4 2 3 4  0 0 4 0 2 3
  0 3 0 0 0 4 0 0 0  4 3 4 0 0 0
  0 0 5 0 0 0 2 0 0  2 0 0 5 2 0
  0 0 0 2 0 0 0 3 0  0 3 0 6 0 3
  0 0 0 2 0 0 0 0 4  0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0  0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0  2 0 0 0 0 0
  0 0 0 0 0 0 0 0 0  0 3 0 0 0 0
  0 0 0 0 0 0 0 0 0  0 0 1 0 0 0
  0 0 0 0 0 0 0 0 0  4 0 0 0 0 0
  0 0 0 0 0 0 0 0 0  0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0  0 0 0 6 0 0          (8)

Lower right 15 × 15 (upper half of the symmetric matrix shown)

   7 0 0 0 0 0 0 0 0 0 0 0 0 7 0
     3 0 0 0 0 0 0 0 0 3 0 0 0 0
       5 0 0 0 0 0 0 0 0 0 5 0 0
         2 0 0 0 0 0 0 0 0 0 0 0
           8 0 0 0 0 0 0 0 0 0 8
             4 0 0 0 0 0 4 0 0 0
               2 0 0 0 0 0 0 2 0
                 3 0 0 0 0 0 0 3
                   4 0 0 0 0 0 0
                     6 0 0 0 0 0
                       6 0 0 0 0
                         5 0 0 0
                          11 0 0
                             9 0
                              11          (9)

The method just preceding is convenient for missing subclasses. In that case the γ_ij associated with n_ij = 0 are included in the mixed model equations.

Chapter 31
Maternal Effects
C. R. Henderson
1984 - Guelph

Many traits are influenced by the environment contributed by the dam. This is
particularly true for traits measured early in life and for species in which the dam nurses
the young for several weeks or months. Examples are 3 month weights of pigs, 180 day
weights of beef calves, and weaning weights of lambs. In fact, genetic merit for maternal
ability can be an important trait for which to select. This chapter is concerned with some
models for maternal effects and with BLUP of them.

Model For Maternal Effects

Maternal effects can be estimated only through the progeny performance of a female or the
progeny performance of a related female when direct and maternal effects are uncorrelated.
If they are correlated, maternal effects can be evaluated whenever direct can be. Because
the maternal ability is actually a phenotypic manifestation, it can be regarded as the sum
of a genetic effect and an environmental effect. The genetic effect can be partitioned, at least conceptually, into additive, dominance, additive × additive, etc. components. The
environmental part can be partitioned, as is often done for lactation yield in dairy cows,
into temporary and permanent environmental effects. Some workers have suggested that
the permanent effects can be attributed in part to the maternal contribution of the dam
of the dam whose maternal effects are under consideration.
Obviously if one is to evaluate individuals for maternal abilities, estimates of the
underlying variances and covariances are needed. This is a difficult problem in part due
to much confounding between maternal and direct genetic effects. BLUP solutions are
probably quite sensitive to errors in estimates of the parameters used in the prediction
equations. We will illustrate these principles with some examples.

Pedigrees Used In Example


  Individual No.   Sex      Sire      Dam       Record
        1          Male     Unknown   Unknown      6
        2          Female   Unknown   Unknown      9
        3          Female      1         2         4
        4          Female      1         2         7
        5          Male     Unknown   Unknown      8
        6          Male     Unknown   Unknown      3
        7          Male        6         3         6
        8          Male        5         4         8

This gives an A matrix as follows:

  [ 1     0    .5    .5    0    0   .25   .25 ]
  [ 0     1    .5    .5    0    0   .25   .25 ]
  [.5    .5     1    .5    0    0   .5    .25 ]
  [.5    .5    .5     1    0    0   .25   .5  ]
  [ 0     0     0     0    1    0    0    .5  ]
  [ 0     0     0     0    0    1   .5     0  ]
  [.25   .25   .5    .25   0   .5    1    .125]
  [.25   .25   .25   .5   .5    0   .125    1 ]

The corresponding dominance relationship matrix is a matrix with 1's in the diagonals, and the only non-zero off-diagonal element is d_34 = .25.
For our first example we assume a model with both additive direct and additive maternal effects. We assume that σ_e^2 = 1, σ_a^2 (direct) = .5, σ_a^2 (maternal) = .4, and the covariance of direct with maternal = .2. We assume X = 1, a vector of 1's. In all of our examples we have assumed that the permanent environmental contribution to maternal effects is negligible. If one does not wish to make this assumption, a vector of such effects can be included. Its variance is I σ_p^2, and it is assumed to be uncorrelated with any other variables. Then permanent environmental effects can be predicted only for those animals with recorded progeny. The incidence matrix excluding p is
1
  [1   1 0 0 0 0 0 0 0   0 0 0 0 0 0 0 0]
  [1   0 1 0 0 0 0 0 0   0 0 0 0 0 0 0 0]
  [1   0 0 1 0 0 0 0 0   0 1 0 0 0 0 0 0]
  [1   0 0 0 1 0 0 0 0   0 1 0 0 0 0 0 0]
  [1   0 0 0 0 1 0 0 0   0 0 0 0 0 0 0 0]
  [1   0 0 0 0 0 1 0 0   0 0 0 0 0 0 0 0]
  [1   0 0 0 0 0 0 1 0   0 0 1 0 0 0 0 0]
  [1   0 0 0 0 0 0 0 1   0 0 0 1 0 0 0 0]          (1)

Cols. 2-9 represent a and cols. 10-17 represent m. This gives the following OLS equations.

With unknowns ordered (μ, a_1, ..., a_8, m_1, ..., m_8) and only the upper half of the symmetric coefficient matrix shown, the OLS equations (31.2) are

  8  1  1  1  1  1  1  1  1  0  2  1  1  0  0  0  0
     1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
        1  0  0  0  0  0  0  0  0  0  0  0  0  0  0
           1  0  0  0  0  0  0  1  0  0  0  0  0  0
              1  0  0  0  0  0  1  0  0  0  0  0  0
                 1  0  0  0  0  0  0  0  0  0  0  0
                    1  0  0  0  0  0  0  0  0  0  0
                       1  0  0  0  0  1  0  0  0  0
                          1  0  0  0  0  1  0  0  0
                             0  0  0  0  0  0  0  0
                                2  0  0  0  0  0  0
                                   1  0  0  0  0  0
                                      1  0  0  0  0
                                         0  0  0  0
                                            0  0  0
                                               0  0
                                                  0          (2)

with right hand side vector

  (51, 6, 9, 4, 7, 8, 3, 6, 8, 0, 11, 6, 8, 0, 0, 0, 0)',

and

  G = [.5A  .2A]
      [.2A  .4A].

Adding the inverse of G to the lower 16 × 16 submatrix of (31.2) gives the mixed model equations, the solution to which is

  μ̂ = 6.386,
  â = (.241, .541, .269, .400, .658, 1.072, .585, .709)',
  m̂ = (.074, .136, .144, .184, .263, .429, .252, .296)'.

In contrast, if Cov(a, m') = 0, the maternal predictions of 5 and 6 are 0. With σ_a^2 = .5, σ_m^2 = .4, σ_am = 0 the solution is

  μ̂ = 6.409,
  â = (.280, .720, .214, .440, .659, 1.099, .602, .742)',
  m̂ = (.198, .344, .029, .081, 0, 0, .014, .040)'.

Note now that 5 and 6 cannot be evaluated for m since they are males and have no female relatives with progeny.
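A sketch of the first example follows (assumed code; the structure and parameter values follow the description above, but the output has not been checked against the printed solution).

    import numpy as np

    A = np.array([
        [1, 0, .5, .5, 0, 0, .25, .25],
        [0, 1, .5, .5, 0, 0, .25, .25],
        [.5, .5, 1, .5, 0, 0, .5, .25],
        [.5, .5, .5, 1, 0, 0, .25, .5],
        [0, 0, 0, 0, 1, 0, 0, .5],
        [0, 0, 0, 0, 0, 1, .5, 0],
        [.25, .25, .5, .25, 0, .5, 1, .125],
        [.25, .25, .25, .5, .5, 0, .125, 1]])
    dam = {3: 2, 4: 2, 7: 3, 8: 4}                 # recorded animal -> dam (1-based)
    y = np.array([6., 9., 4., 7., 8., 3., 6., 8.])

    X = np.ones((8, 1))
    Za = np.eye(8)                                 # each animal has its own record
    Zm = np.zeros((8, 8))
    for animal, d in dam.items():
        Zm[animal - 1, d - 1] = 1.0
    W = np.hstack([X, Za, Zm])

    G = np.block([[.5 * A, .2 * A],
                  [.2 * A, .4 * A]])               # direct, maternal, and covariance
    lhs = W.T @ W                                  # sigma_e^2 = 1
    lhs[1:, 1:] += np.linalg.inv(G)                # add G^-1 to the lower 16 x 16 block
    sol = np.linalg.solve(lhs, W.T @ y)
    mu_hat, a_hat, m_hat = sol[0], sol[1:9], sol[9:17]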

Additive And Dominance Maternal And Direct Effects

If we assume that additive and dominance effects influence both direct and maternal merit, the incidence matrix of (31.1) is augmented on the right by the last 16 columns of (31.1), giving an 8 × 33 matrix. Assume the same additive direct and maternal parameters as before and let the dominance parameters be .3 for direct variance, .2 for maternal, and .1 for their covariance. Then

  G = [.5A  .2A   0    0 ]
      [.2A  .4A   0    0 ]
      [ 0    0   .3D  .1D]
      [ 0    0   .1D  .2D]

The solution is

  μ̂          = 6.405,
  â direct    = (.210, .478, .217, .350, .545, .904, .503, .588)',
  â maternal  = (.043, .083, .123, .156, .218, .362, .220, .243)',
  d̂ direct    = (.045, .392, .419, .049, .242, .577, .069, .169)',
  d̂ maternal  = (.015, .078, .078, .119, .081, .192, .023, .056)'.

Quadratics to compute to estimate variances and covariances by MIVQUE would be

  â(direct)' A^-1 â(direct),
  â(direct)' A^-1 â(maternal),
  â(maternal)' A^-1 â(maternal),
  d̂(direct)' D^-1 d̂(direct),
  d̂(direct)' D^-1 d̂(maternal),
  d̂(maternal)' D^-1 d̂(maternal),
  ê'ê.

Of course the data of our example would be quite inadequate to estimate these variances and covariances.
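A small sketch of these quadratics (assumed code; it reuses A, D and the solution vectors from a fit such as the one above):

    import numpy as np

    def mivque_quadratics(A, D, a_dir, a_mat, d_dir, d_mat, e_hat):
        Ai, Di = np.linalg.inv(A), np.linalg.inv(D)
        return {
            "a_dir' A^-1 a_dir": a_dir @ Ai @ a_dir,
            "a_dir' A^-1 a_mat": a_dir @ Ai @ a_mat,
            "a_mat' A^-1 a_mat": a_mat @ Ai @ a_mat,
            "d_dir' D^-1 d_dir": d_dir @ Di @ d_dir,
            "d_dir' D^-1 d_mat": d_dir @ Di @ d_mat,
            "d_mat' D^-1 d_mat": d_mat @ Di @ d_mat,
            "e'e": e_hat @ e_hat,
        }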

Chapter 32
Three Way Mixed Model
C. R. Henderson
1984 - Guelph

Some of the principles of preceding chapters are illustrated in this chapter using an
unbalanced 3 way mixed model. The method used here is one of several alternatives that
appeals to me at this time. However, I would make no claims that it is best.

The Example

Suppose we have a 3 way classification with factors denoted by A, B, C. The levels of A are random and those of B and C are fixed. Accordingly a traditional mixed model would contain factors and interactions as follows: a, b, c, ab, ac, bc, abc, with b, c, and bc fixed, and the others random. The subclass numbers are as follows.

             BC subclasses
  A    11  12  13  21  22  23  31  32  33
  1     5   2   3   6   0   3   2   5   0
  2     1   2   4   0   5   2   3   6   0
  3     0   4   8   2   3   5   7   0   0

The associated ABC subclass means are

  A    11  12  13  21  22  23  31  32  33
  1     3   5   2   4   -   8   9   2   -
  2     5   6   7   -   8   5   2   6   -
  3     -   9   8   4   3   7   5   -   -

(a dash denotes a subclass with no observations.)
Because there are no observations on bc_33, estimates and tests of b, c, and b × c that mimic the filled subclass case cannot be accomplished using unbiased estimators. Accordingly, we might use some prior on squares and products of bc_jk and obtain biased estimators. Let us assume the following prior values: σ_e^2/σ_a^2 = 2, σ_e^2/σ_ab^2 = 3, σ_e^2/σ_ac^2 = 4, σ_e^2/pseudo σ_bc^2 = 6, σ_e^2/σ_abc^2 = 5.

Estimation And Prediction

The OLS equations that include missing observations have 63 unknowns, as follows:

  a     1-3
  b     4-6
  c     7-9
  ab   10-18
  ac   19-27
  bc   28-36
  abc  37-63

W is a 20 × 63 matrix with 1's in the following columns of the 20 rows. The other elements are 0.

  Levels of a, b, c      Cols. with 1
  1  1  1                1,  4,  7, 10, 19, 28, 37
  1  1  2                1,  4,  8, 10, 20, 29, 38
  1  1  3                1,  4,  9, 10, 21, 30, 39
  1  2  1                1,  5,  7, 11, 19, 31, 40
  1  2  3                1,  5,  9, 11, 21, 33, 42
  1  3  1                1,  6,  7, 12, 19, 34, 43
  1  3  2                1,  6,  8, 12, 20, 35, 44
  2  1  1                2,  4,  7, 13, 22, 28, 46
  2  1  2                2,  4,  8, 13, 23, 29, 47
  2  1  3                2,  4,  9, 13, 24, 30, 48
  2  2  2                2,  5,  8, 14, 23, 32, 50
  2  2  3                2,  5,  9, 14, 24, 33, 51
  2  3  1                2,  6,  7, 15, 22, 34, 52
  2  3  2                2,  6,  8, 15, 23, 35, 53
  3  1  2                3,  4,  8, 16, 26, 29, 56
  3  1  3                3,  4,  9, 16, 27, 30, 57
  3  2  1                3,  5,  7, 17, 25, 31, 58
  3  2  2                3,  5,  8, 17, 26, 32, 59
  3  2  3                3,  5,  9, 17, 27, 33, 60
  3  3  1                3,  6,  7, 18, 25, 34, 61

Let N be a 20 × 20 diagonal matrix with the filled subclass numbers in the diagonal, that is N = diag(5, 2, . . . , 5, 7). Then the OLS coefficient matrix is W'NW, and the right hand side vector is W'Ny, where y = (3, 5, . . . , 7, 5)' is the vector of subclass means. The right hand side vector is (107, 137, 187, 176, 150, 105, 111, 153, 167, 31, 48, 28, 45, 50, 42, 100, 52, 35, 57, 20, 30, 11, 88, 38, 43, 45, 99, 20, 58, 98, 32, 49, 69, 59, 46, 0, 15, 10, 6, 24, 0, 24, 18, 10, 0, 5, 12, 28, 0, 40, 10, 6, 36, 0, 0, 36, 64, 8, 9, 35, 35, 0, 0).
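A sketch of this setup follows (assumed code, using the column positions tabulated above; because the fixed b and c columns are not of full rank, a generalized inverse is used, which yields one of many solutions and not necessarily the particular one quoted below).

    import numpy as np

    cols = [
        [1,4,7,10,19,28,37], [1,4,8,10,20,29,38], [1,4,9,10,21,30,39],
        [1,5,7,11,19,31,40], [1,5,9,11,21,33,42], [1,6,7,12,19,34,43],
        [1,6,8,12,20,35,44], [2,4,7,13,22,28,46], [2,4,8,13,23,29,47],
        [2,4,9,13,24,30,48], [2,5,8,14,23,32,50], [2,5,9,14,24,33,51],
        [2,6,7,15,22,34,52], [2,6,8,15,23,35,53], [3,4,8,16,26,29,56],
        [3,4,9,16,27,30,57], [3,5,7,17,25,31,58], [3,5,8,17,26,32,59],
        [3,5,9,17,27,33,60], [3,6,7,18,25,34,61]]
    n    = np.array([5,2,3,6,3,2,5, 1,2,4,5,2,3,6, 4,8,2,3,5,7], dtype=float)
    ybar = np.array([3,5,2,4,8,9,2, 5,6,7,8,5,2,6, 9,8,4,3,7,5], dtype=float)

    W = np.zeros((20, 63))
    for row, cs in enumerate(cols):
        W[row, [c - 1 for c in cs]] = 1.0
    N = np.diag(n)
    lhs = W.T @ N @ W
    rhs = W.T @ N @ ybar

    # diagonal of prior ratios sigma_e^2/sigma_u^2 added to the random-effect rows
    ratios = np.concatenate([np.full(3, 2.), np.zeros(6), np.full(9, 3.),
                             np.full(9, 4.), np.full(9, 6.), np.full(27, 5.)])
    sol = np.linalg.pinv(lhs + np.diag(ratios)) @ rhs   # one mixed model solution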
Now we add the following diagonal matrix to the coefficient matrix: (2, 2, 2, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5). A resulting mixed model solution is


  â   = (.54801, .10555, .44246)',
  b°  = (6.45206, 6.13224, 5.77884)',
  c°  = (1.64229, .67094, 0)',
  âb  = (1.21520, .46354, .38632, .14669, .29571, .37204, 1.06850, .75924, .01428)',
  âc  = (.60807, .70385, .17822, .63358, .85039, .16403, .02552, .14653, .34225)',
  b̂c  = (.12431, .47539, .35108, .30013, .05095, .35108, .42444, .42444, 0)',
  âbc = (.26516, .34587, .80984, .38914, 0, .66726, 1.14075, .90896, 0, .11598, .38832,
         .36036, 0, .66901, .49158, .62285, .39963, 0, 0, .61292, .02819, .02898, .73014,
         .24561, .00857, 0, 0)'.

From these results the biased predictions of subclass means are in (32.1).

        B1                      B2                      B3
  A     C1     C2     C3        C1     C2     C3        C1     C2     C3
  1    3.265  4.135  3.350     4.324  4.622  6.888     6.148  2.909  5.439
  2    4.420  6.971  6.550     3.957  7.331  6.229     3.038  5.667  5.348
  3    6.222  8.234  7.982     3.928  4.217  6.754     5.006  4.965  6.549     (1)

Note that these are different from the ȳ_ijk for filled subclasses, the latter being BLUE. Also, subclass means are predicted for those cases with no observations.

Tests Of Hypotheses

Suppose we wish to test hypotheses regarding b, c, and bc. Let μ_jk = μ + b_j + c_k + bc_jk. We test that the μ_j. are equal, that the μ_.k are equal, and that all μ_jk - μ_j. - μ_.k + μ_.. are equal. Of course these functions are not estimable if any jk subclass is missing, as is true in our example. Consequently we must resort to biased estimation and accompanying approximate tests based on estimated MSE rather than sampling variances. We assume that our priors are the correct values and proceed for the first test.
  K' = [1 1 1 0 0 0 -1 -1 -1 | 1 1 1 0 0 0 -1 -1 -1 | 1 1 1 0 0 0 -1 -1 -1]
       [0 0 0 1 1 1 -1 -1 -1 | 0 0 0 1 1 1 -1 -1 -1 | 0 0 0 1 1 1 -1 -1 -1]

where μ is the vector of μ_ijk ordered k within j within i. From (32.1) the estimate of these functions is (6.05893, 3.18058)'. To find the mean squared errors of this function we first compute the mean squared errors of the μ̂_ijk. This is WCW' ≡ P, where W is the matrix relating the μ_ijk subclass means to the 63 elements of our model, and C is a g-inverse of the mixed model coefficient matrix. Then the mean squared error of K'μ° is

  K'PK = [17.49718  13.13739]
         [13.13739  16.92104]

Then

  (K'μ°)' (K'PK)^-1 (K'μ°) = 2.364,

and this is distributed approximately as χ^2 with 2 d.f. under the null hypothesis.
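A short sketch of the criterion (assumed code, using the numbers just quoted):

    import numpy as np

    est = np.array([6.05893, 3.18058])              # K'mu for the B contrasts
    mse = np.array([[17.49718, 13.13739],
                    [13.13739, 16.92104]])          # K'PK
    criterion = est @ np.linalg.solve(mse, est)
    print(criterion)    # about 2.364, compared with chi-square on 2 d.f.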
To test C we use

  K' = [1 0 -1 1 0 -1 1 0 -1 | 1 0 -1 1 0 -1 1 0 -1 | 1 0 -1 1 0 -1 1 0 -1]
       [0 1 -1 0 1 -1 0 1 -1 | 0 1 -1 0 1 -1 0 1 -1 | 0 1 -1 0 1 -1 0 1 -1]

  K'μ° = (-14.78060, -6.03849)',

with

  MSE = [17.25559  10.00658]
        [10.00658  14.13424]

This gives the test criterion = 13.431, distributed approximately as χ^2 with 2 d.f. under the null hypothesis.
For the B × C interaction we use the 4 × 27 matrix K' whose rows, for each level of a, repeat the patterns

  (1 0 -1 0 0 0 -1 0 1),
  (0 1 -1 0 0 0 0 -1 1),
  (0 0 0 1 0 -1 -1 0 1),
  (0 0 0 0 1 -1 0 -1 1)

over the 9 bc subclasses. This gives

  K'μ° = (-.83026, 5.25381, -4.51772, .09417)',

with

  MSE = [6.37074  4.31788  4.56453  3.64685]
        [         6.09614  3.77847  4.70751]
        [                  6.32592  4.23108]
        [                           6.31457]

The test criterion is 21.044, distributed approximately as χ^2 with 4 d.f. under the null hypothesis.
Note that in these examples of hypothesis testing the priors used were quite arbitrary.
The tests are of little value unless one has good prior estimates. This of course is true for
any unbalanced mixed model design.

REML Estimation By EM Method

We next illustrate one round of estimation of variances by the EM algorithm. We treat σ_bc^2 as a variance. The first round of estimation is obtained from the mixed model solution of Section 32.2. For σ_e^2 we compute

  [y'y - (solution vector)'(r.h.s. vector)]/[n - rank(X)].

  y'y = 2802.
  Red = 2674.47.
  σ̂_e^2 = (2802 - 2674.47)/(78 - 5) = 1.747.

For each random factor the update is the sum of squares of its predictions plus σ̂_e^2 times the trace of the corresponding diagonal block of a g-inverse of the coefficient matrix, divided by the number of levels. For a,

  σ̂_a^2 = [â'â + 1.747 tr(C_aa)]/3 = .669,

where

  C_aa = [.28568  .10673  .10759]
         [.10673  .28645  .10683]
         [.10759  .10683  .28558]

Similarly,

  σ̂_ab^2  = [âb'âb + 1.747 tr(C_ab,ab)]/9   = .847,
  σ̂_ac^2  = [âc'âc + 1.747 tr(C_ac,ac)]/9   = .580,
  σ̂_bc^2  = [b̂c'b̂c + 1.747 tr(C_bc,bc)]/9   = .357,
  σ̂_abc^2 = [âbc'âbc + 1.747 tr(C_abc,abc)]/27 = .534.

The solution for four rounds follows.

  Round          1      2      3      4
  σ_e^2        1.747  1.470  1.185   .915
  σ_a^2         .669   .468   .330   .231
  σ_ab^2        .847   .999  1.090  1.102
  σ_ac^2        .580   .632   .638   .587
  σ_bc^2        .357   .370   .362   .327
  σ_abc^2       .534   .743  1.062  1.506

It appears that σ̂_e^2 and σ̂_abc^2 may be highly confounded, and convergence will be slow. Note that σ̂_e^2 + σ̂_abc^2 does not change much.
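A sketch of one EM update for a single component (assumed code; the numbers are those shown for the a factor above, and only squares enter so lost signs do not matter):

    import numpy as np

    def em_update(u_hat, C_uu, s2e, q):
        # u'u plus sigma_e^2 * trace of the g-inverse block, divided by number of levels
        return (u_hat @ u_hat + s2e * np.trace(C_uu)) / q

    a_hat = np.array([.54801, .10555, .44246])
    C_aa = np.array([[.28568, .10673, .10759],
                     [.10673, .28645, .10683],
                     [.10759, .10683, .28558]])
    print(em_update(a_hat, C_aa, 1.747, 3))   # about .669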

Chapter 33
Selection When Variances are Unequal
C. R. Henderson
1984 - Guelph

The mixed model equations for BLUP are well adapted to deal with variances that
differ from one subpopulation to another. These unequal variances can apply to either e
or to u or a subvector of u. For example, cows are to be selected from several herds, but
the variances differ from one herd to another. Some possibilities are the following.
1. σ_a^2, the additive genetic variance, is the same in all herds but the within herd σ_e^2 differ.
2. σ_e^2 is constant from one herd to another but the intra-herd σ_a^2 differ.
3. Both σ_a^2 and σ_e^2 differ from herd to herd, but σ_a^2/σ_e^2 is constant. That is, intra-herd h^2 is the same in all herds, but the phenotypic variance is different.
4. Both σ_a^2 and σ_e^2 differ among herds and so does σ_a^2/σ_e^2.

Sire Evaluation With Unequal Variances

As an example, AI sires are sometimes evaluated across herds using

  y_ijk = s_i + h_j + e_ijk,
  Var(s) = A σ_s^2,
  Var(e) = I σ_e^2,
  Cov(s, e') = 0.

h is fixed. Suppose, however, that we assume, probably correctly, that the within herd σ_e^2 varies from herd to herd, probably related to the level of production. Suppose also that σ_s^2 is influenced by the herd. That is, in the population of sires, σ_s^2 is different when sires are used in herd 1 as compared to σ_s^2 when these same sires are used in herd 2. Suppose further that σ_s^2/σ_e^2 is the same for every herd. This may be a somewhat unrealistic assumption, but it may be an adequate approximation. We can treat this as a multiple trait problem, trait 1 being progeny values in herd 1, trait 2 being progeny values in herd 2, etc. For purposes of illustration let us assume that all additive genetic correlations between pairs of traits are 1. In that case if the true rankings of sires for herd 1 were known, then these would be the true rankings in herd 2.

Let us order the progeny data by sire within herd. Then

  R = [I v_1    0    ...    0   ]
      [  0    I v_2  ...    .   ]
      [  .      .    ..     .   ]
      [  0     ...        I v_t ]

where there are t herds, and

  G = [A w_11  A w_12  ...  A w_1t]
      [A w_12  A w_22  ...  A w_2t]
      [  .        .           .   ]
      [A w_1t  A w_2t  ...  A w_tt]

where v_i/w_ii is the same for all i = 1, ..., t. Further, w_ij = (w_ii w_jj)^.5. This is, of course, an oversimplified model since it does not take into account season and year of freshening. It would apply to a situation in which all data are from one year and season.
We illustrate this model with a small set of data.

           n_ij              y_ij.
  Sire   herd: 1  2  3       1   2   3
   1           5  8  0       6  12   -
   2           3  4  7       5   8   9
   3           0  5  9       -  10  12

  A = [1    .5   .5 ]
      [.5    1   .25]
      [.5   .25   1 ]

σ_e^2 for the 3 herds is 48, 108, 192, respectively. Var(s) for the 3 herds is

  [4A    6A    8A ]
  [6A    9A   12A ]
  [8A   12A   16A ]

Note that 6 = [4(9)]^.5, 8 = [4(16)]^.5, and 12 = [9(16)]^.5. Accordingly G is singular and


we need to use the method described in Chapter 5 for singular G. Now the GLS coefficient matrix for fixed s is in (33.1) . . . (33.3). This corresponds to ordering (s_11, s_21, s_31, s_12, s_22, s_32, s_13, s_23, s_33, h_1, h_2, h_3). The first subscript on s refers to sire number and the second to herd number. The right hand side vector is (.1250, .1042, 0, .1111, .0741, .0926, 0, .0469, .0625, .2292, .2778, .1094)'. The upper diagonal element of (33.1) to (33.3) is 5/48, 5 being the number of progeny of sire 1 in herd 1, and 48 being σ_e^2 for herd 1. The lower diagonal is 16/192. The first element of the right hand side is 6/48, and the last is 21/192.
Upper left 6 × 6

  diag(.10417, .06250, 0, .07407, .03704, .04630).          (1)

Upper right 6 × 6 and (lower left 6 × 6)'

  [0  0  0  .10417    0      0]
  [0  0  0  .06250    0      0]
  [0  0  0    0       0      0]
  [0  0  0    0     .07407   0]
  [0  0  0    0     .03704   0]
  [0  0  0    0     .04630   0]          (2)

Lower right 6 × 6 (upper half of the symmetric matrix shown)

  0     0       0       0        0        0
     .03646     0       0        0     .03646
            .04687      0        0     .04687
                     .16667      0        0
                              .15741      0
                                        .08333          (3)

Now we multiply these equations by

  [4A    6A    8A   0  ]
  [6A    9A   12A   0  ]
  [8A   12A   16A   0  ]
  [ 0     0     0   I_3]

and add 1 to each of the first 9 diagonal elements. Solving these equations, the solution is

  (-.0720, .0249, .0111, -.1080, .0373, .0166, -.1439, .0498, .0222, 1.4106, 1.8018, 1.2782)'.

Note that ŝ_i1/ŝ_i2 = 2/3, ŝ_i1/ŝ_i3 = 1/2, and ŝ_i2/ŝ_i3 = 3/4. These are in the proportion (2:3:4), which is (4^.5 : 9^.5 : 16^.5). Because of this relationship we can reduce the mixed model equations to a set involving s_i1 and h_j by premultiplying the equations by

  [1  0  0  1.5   0    0   2  0  0  0  0  0]
  [0  1  0   0   1.5   0   0  2  0  0  0  0]
  [0  0  1   0    0   1.5  0  0  2  0  0  0]
  [0  0  0   0    0    0   0  0  0  1  0  0]
  [0  0  0   0    0    0   0  0  0  0  1  0]
  [0  0  0   0    0    0   0  0  0  0  0  1]          (4)

Then the resulting coefficient matrix is post-multiplied by the transpose of matrix (33.4). This gives equation (33.5).

  [15.104   4.229   4.229   3.927   5.035   2.417] [s_11]   [16.766]
  [ 3.927  15.708   2.115   3.323   3.726   2.794] [s_21]   [15.104]
  [ 3.927   2.115  15.708   1.964   4.028   3.247] [s_31] = [14.122]
  [  .104    .062     0      .167     0       0  ] [h_1 ]   [  .229]
  [  .111    .056    .069     0      .157     0  ] [h_2 ]   [  .278]
  [   0      .073    .094     0       0      .083] [h_3 ]   [  .109]     (5)

The solution is (-.0720, .0249, .0111, 1.4106, 1.8018, 1.2782)'. These are the same as before.
How would one report sire predictions in a problem like this? Probably the logical thing to do is to report them for a herd with average σ_e^2. Then it should be pointed out that sires are expected to differ more than this in herds with large σ_e^2 and to differ less in herds with small σ_e^2. A simpler method is to set up equations at once involving only s_i1, or any other chosen s_ij (j fixed). We illustrate with s_i1. The W matrix for our example, with subclass means ordered sires within herds, is

  [1    0    0   1  0  0]
  [0    1    0   1  0  0]
  [1.5  0    0   0  1  0]
  [0   1.5   0   0  1  0]
  [0    0   1.5  0  1  0]
  [0    2    0   0  0  1]
  [0    0    2   0  0  1]
This corresponds to s_i2 = 1.5 s_i1 and s_i3 = 2 s_i1. Now compute the diagonal matrix

  D = diag(5, 3, 8, 4, 5, 7, 9) [diag(48, 48, 108, 108, 108, 192, 192)]^-1.

Then the GLS coefficient matrix is W'DW and the right hand side is W'Dy, where y is the subclass mean vector. This gives

  [.2708    0      0    .1042  .1111    0  ] [s_11]   [.2917]
  [  0    .2917    0    .0625  .0556  .0729] [s_21]   [.3090]
  [  0      0    .2917    0    .0694  .0937] [s_31] = [.2639]
  [.1042  .0625    0    .1667    0      0  ] [h_1 ]   [.2292]
  [.1111  .0556  .0694    0    .1574    0  ] [h_2 ]   [.2778]
  [  0    .0729  .0937    0      0    .0833] [h_3 ]   [.1094]     (6)

Then add (4A)^-1 to the upper 3 × 3 submatrix of (33.6) to obtain the mixed model equations. Remember that 4A is the variance of the sires in herd 1. The solution to these equations is as before, (-.0720, .0249, .0111, 1.4106, 1.8018, 1.2782)'.
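A sketch of this simpler method follows (assumed code; the W'DW matrix and right hand side match (33.6), so it should reproduce the solution quoted above).

    import numpy as np

    A = np.array([[1, .5, .5],
                  [.5, 1, .25],
                  [.5, .25, 1]])
    v = {1: 48., 2: 108., 3: 192.}               # herd residual variances
    scale = {1: 1.0, 2: 1.5, 3: 2.0}             # (w_jj/w_11)^.5 for each herd
    subclasses = [                                # (sire, herd, n, y-total)
        (1, 1, 5, 6.), (2, 1, 3, 5.),
        (1, 2, 8, 12.), (2, 2, 4, 8.), (3, 2, 5, 10.),
        (2, 3, 7, 9.), (3, 3, 9, 12.)]

    W = np.zeros((7, 6))                          # columns s11, s21, s31, h1, h2, h3
    D = np.zeros(7)
    ybar = np.zeros(7)
    for row, (s, h, n, ytot) in enumerate(subclasses):
        W[row, s - 1] = scale[h]
        W[row, 3 + h - 1] = 1.0
        D[row] = n / v[h]
        ybar[row] = ytot / n

    lhs = W.T @ np.diag(D) @ W
    lhs[:3, :3] += np.linalg.inv(4 * A)           # Var(s) in herd 1 is 4A
    sol = np.linalg.solve(lhs, W.T @ np.diag(D) @ ybar)
    print(sol)    # (s11, s21, s31, h1, h2, h3)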

Cow Evaluation With Unequal Variances

Next we illustrate inter-herd joint cow and sire evaluation when herd variances are unequal. We assume a simple model

  y_ij = h_i + a_j + e_ij.
h is fixed, a is additive genetic merit with

  Var(a) = [A g_11  A g_12  ...  A g_1t]
           [A g_12  A g_22  ...  A g_2t]
           [  .        .           .   ]
           [A g_1t  A g_2t  ...  A g_tt]

A is the numerator relationship matrix for all animals. There are t herds, and we treat production as a different trait in each herd. We assume genetic correlations of 1. Therefore g_ij = (g_ii g_jj)^.5.

  Var(e) = [I v_1    0    ...    0   ]
           [  0    I v_2  ...    .   ]
           [  .      .    ..     .   ]
           [  0     ...        I v_t ]

First we assume σ_a^2/σ_e^2 is the same for all herds. Therefore g_ii/v_i is the same for all herds.
As an example suppose that we have 2 herds with cows 2, 3 making records in herd 1 and cows 4, 5 making records in herd 2. These animals are out of unrelated dams, and the sire of 2 and 4 is 1. The records are 3, 2, 5, 6.

  A = [ 1   .5   0   .5   0]
      [.5    1   0  .25   0]
      [ 0    0   1    0   0]
      [.5  .25   0    1   0]
      [ 0    0   0    0   1]

Ordering the data by cow number, and the unknowns by h_1, h_2, a in herd 1 (animals 1-5), a in herd 2 (animals 1-5), the incidence matrix is

  [1  0   0 1 0 0 0   0 0 0 0 0]
  [1  0   0 0 1 0 0   0 0 0 0 0]
  [0  1   0 0 0 0 0   0 0 0 1 0]
  [0  1   0 0 0 0 0   0 0 0 0 1]

Suppose that

  G = [4A   8A ]       R = [12I    0 ]
      [8A  16A ],          [  0  48I ].

Then σ_e^2/σ_a^2 = 3 in each herd, implying that h^2 = .25. Note that G is singular, so the method for singular G is used. With these parameters the mixed model solution is

  ĥ = (2.508, 5.468),
  â in herd 1 = (.030, .110, .127, .035, .066),
  â in herd 2 = (.061, .221, .254, .069, .133).

Note that â_i in herd 2 = 2 â_i in herd 1, corresponding to (16/4)^.5 = 2.
A simpler method is to use an incidence matrix as follows:

  [1  0   0 1 0 0 0]
  [1  0   0 0 1 0 0]
  [0  1   0 0 0 2 0]
  [0  1   0 0 0 0 2]
This corresponds to unknowns h_1, h_2, a in herd 1. Now G = 4A and R is the same as before. The resulting solution is the same as before for ĥ and â in herd 1. Then â in herd 2 is 2 times â in herd 1.
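A sketch of this simpler setup follows (assumed code, using the first example's parameters; it has not been checked against the printed solution).

    import numpy as np

    A = np.array([[1, .5, 0, .5, 0],
                  [.5, 1, 0, .25, 0],
                  [0, 0, 1, 0, 0],
                  [.5, .25, 0, 1, 0],
                  [0, 0, 0, 0, 1]])
    records = [(2, 1, 3.), (3, 1, 2.), (4, 2, 5.), (5, 2, 6.)]   # (cow, herd, y)
    resid = {1: 12., 2: 48.}
    scale = {1: 1., 2: 2.}                        # herd-2 records carry (16/4)^.5 = 2

    W = np.zeros((4, 7))                          # columns h1, h2, a1..a5 (herd-1 scale)
    R = np.zeros(4)
    y = np.zeros(4)
    for row, (cow, herd, obs) in enumerate(records):
        W[row, herd - 1] = 1.0
        W[row, 2 + cow - 1] = scale[herd]
        R[row] = resid[herd]
        y[row] = obs

    Rinv = np.diag(1.0 / R)
    lhs = W.T @ Rinv @ W
    lhs[2:, 2:] += np.linalg.inv(4 * A)           # G = 4A for herd-1 breeding values
    sol = np.linalg.solve(lhs, W.T @ Rinv @ y)
    h_hat, a_herd1 = sol[:2], sol[2:]
    a_herd2 = 2 * a_herd1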
Now suppose that G is the same as before but σ_e^2 = 12, 24 respectively. Then h^2 is higher in herd 2 than in herd 1. This leads again to the â in herd 2 being twice the â in herd 1, but the â for cows making records in herd 2 are relatively more variable, and if we were selecting a single cow, say for planned mating, the chance that she would come from herd 2 is increased. The actual solution in this example is

  ĥ = (2.513, 5.468),
  â in herd 1 = (.011, .102, .128, .074, .106),
  â in herd 2 = twice those in herd 1.
The only reason we can compare cows in different herds is the use of sires across herds. A problem with the methods of this chapter is that the individual intra-herd variances must be estimated with limited data. It would seem, therefore, that it might be advisable to take as the estimate for an individual herd the estimate coming from that herd regressed toward the mean of the variances of all herds, the amount of regression depending upon the number of observations. This would imply, perhaps properly, that intra-herd variances are a sample from some population of variances. I have not derived a method comparable to BLUP for this case.
