Revision SB Chap 8 12 Updated 1


Chapter 8

8.1 SAMPLING AND ESTIMATION


A sample statistic: a random variable whose value depends on the items included in the random sample.
Some samples may represent the population well, while other samples could differ greatly from the population
(particularly if the sample size is small)
To make inferences about a population, consider four factors:

• Sampling variation (uncontrollable).


• Population variation (uncontrollable).
• Sample size (controllable).
• Desired confidence in the estimate (controllable).

Estimators and estimates


An estimator: a statistic derived from a sample to infer the value of a population parameter.
An estimate: the value of the estimator in a particular sample.

Sample estimator of population parameters

Examples of estimators



Example: Consider eight random samples of size n = 5 from a large population of GMAT scores.
The sample mean is a statistic.
The sample mean used to estimate the population mean is an estimator.

X̄1 = 504.0 (the mean of the first sample) is an estimate.

Sampling error


Sampling error: the difference between an estimate and the corresponding population parameter.
Example for population mean:

Sampling error = X̄ − μ

Properties of Estimators
BIAS
The bias: the difference between the expected value of the estimator and the true parameter. Example for the mean:

Bias = E(X̄) − μ

An unbiased estimator neither overstates nor understates the true parameter on average. Example of an unbiased estimator: E(X̄) = μ
The sample mean (X̄) and the sample proportion (p) are unbiased estimators of μ and π.

Sampling error is random whereas bias is systematic

EFFICIENCY
Efficiency refers to the variance of the estimator’s sampling distribution
Smaller variance means a more efficient estimator. We prefer the minimum variance unbiased estimator (MVUE).

X̄ and s² are minimum variance estimators of μ and σ².

CONSISTENCY
A consistent estimator converges toward the parameter being estimated as the sample size increases.
The variances of the three estimators X̄, s, and p diminish as n increases, so all are consistent estimators.

8.2 CENTRAL LIMIT THEOREM


Sampling distribution of an estimator: the probability distribution of all possible values the statistic may
assume when a random sample of size n is taken.

The sample mean is an unbiased estimator for μ: E(X̄) = μ (the expected value of the sample mean).
The variability of the sample mean is described by its standard deviation, called the standard error of the mean:

σX̄ = σ / √n

Central Limit Theorem for a Mean


If a random sample of size n is drawn from a population: mean μ and standard deviation σ

The distribution of the sample mean X̄ approaches a normal distribution with mean μ and standard deviation σX̄ = σ/√n as the sample size increases.
Three important facts about the sample mean:

1. If the population is normal, the sample mean has a normal distribution centered at μ, with a standard error equal to σX̄ = σ/√n.

2. As sample size n increases, the distribution of sample means converges to the population mean μ (i.e., the standard error of the mean σX̄ = σ/√n gets smaller).

3. Even if your population is not normal, by the Central Limit Theorem, if the sample size is large enough, the sample means will have approximately a normal distribution.
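
These three facts can be checked by simulation. Below is a minimal sketch (assuming NumPy is available; the skewed exponential population and the sample sizes are arbitrary choices for illustration) showing that the sample means center on μ and that their spread shrinks roughly like σ/√n:

import numpy as np

rng = np.random.default_rng(42)
mu = sigma = 10.0                     # an exponential population: mean 10, sd 10, clearly non-normal

for n in (5, 30, 120):
    samples = rng.exponential(scale=mu, size=(10_000, n))   # 10,000 random samples of size n
    means = samples.mean(axis=1)                            # one sample mean per sample
    print(f"n={n:3d}  mean of X-bar = {means.mean():6.2f}   "
          f"sd of X-bar = {means.std(ddof=1):5.2f}   sigma/sqrt(n) = {sigma/np.sqrt(n):5.2f}")

The empirical standard deviation of the sample means tracks σ/√n, and a histogram of the means looks increasingly bell-shaped as n grows, even though the population itself is skewed.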

Applying the Central Limit Theorem


UNIFORM POPULATION
A common rule of thumb is that n ≥ 30 is required to ensure a normal distribution for the sample mean, but a much smaller n will suffice if the population is symmetric.
The CLT predicts:

• The distribution of sample means drawn from the population will be normal
• The standard error of the sample mean 𝛔𝐗̅ will decrease as sample size increases

SKEWED POPULATION
The CLT predicts

• The distribution of sample means drawn from any population will approach normality
• The standard error of the sample mean 𝛔𝐗̅ will diminish as sample size increases.
In highly skewed populations, even n ≥ 30 will not ensure normality, though it is not a bad rule

In severely skewed populations, the mean is a poor measure of center to begin with due to outliers
(Figure: histograms of the means of many samples drawn from the uniform population.)

Range of Sample Means


The CLT permits us to define an interval within which the sample means are expected to fall.
As long as the sample size n is large enough, we can use the normal distribution regardless of the population shape
(or any n if the population is normal to begin with)

We use the familiar z-values for the standard normal distribution. If we know μ and σ, the CLT allows
us to predict the range of sample means for samples of size n:
8.3 SAMPLE SIZE AND STANDARD ERROR
The key is the standard error of the mean: σX̄ = σ/√n. The standard error decreases as n increases.
To halve (÷2) the standard error, you must quadruple (x4) the sample size

You can make the standard error σX̄ = σ/√n as small as you want by increasing n. The mean of the sample means converges to the true population mean μ as n increases.

8.4 CONFIDENCE INTERVAL FOR A MEAN (μ) WITH KNOWN σ


What Is a Confidence Interval?
A sample mean 𝑿̅ calculated from a random sample x1, x2, . . . , xn is a point estimate of the unknown
population mean μ

Construct a confidence interval for the unknown mean μ by adding and subtracting a margin of error from X̄, the mean of our random sample.

The confidence level for this interval is expressed as a percentage such as 90, 95, or 99 percent

Confidence level using z


Choosing a Confidence Level
In order to gain confidence, we must accept a wider range of possible values for μ. Greater confidence
implies loss of precision (i.e., a greater margin of error)
Common confidence level

Interpretation
If you took 100 random samples from the same population and used exactly this procedure to construct
100 confidence intervals using a 95 percent confidence level
➔ approximately 95 (95%) of the intervals would contain the true mean μ, while approximately 5 (5%)
intervals would not

When Can We Assume Normality?


• If σ is known and the population is normal, then we can safely use formula 8.6 to construct the
confidence interval for μ.
• If σ is known but we do not know whether the population is normal
➔ Rule of thumb: n ≥ 30 is sufficient to assume a normal distribution for X̄ (by the CLT), as long as the population is reasonably symmetric and has no outliers.
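
As an illustration, here is a minimal sketch of the z-interval X̄ ± zα/2·σ/√n (SciPy assumed; the five scores, σ = 100, and the 95 percent level are made-up inputs):

import numpy as np
from scipy import stats

x = np.array([510, 480, 525, 470, 495])    # hypothetical GMAT scores
sigma = 100.0                              # population standard deviation assumed known
conf = 0.95
z = stats.norm.ppf(1 - (1 - conf) / 2)     # z_(alpha/2), about 1.96 for 95 percent
margin = z * sigma / np.sqrt(len(x))       # margin of error
print(f"{conf:.0%} CI for mu: {x.mean() - margin:.1f} to {x.mean() + margin:.1f}")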

8.5 CONFIDENCE INTERVAL FOR A MEAN (μ) WITH UNKNOWN σ

Student’s t Distribution
When σ is unknown → the formula for a confidence interval resembles the formula for known σ except
that t replaces z and s replaces σ.

The confidence intervals will be wider (other things being the same) because tα/2 is always greater than zα/2.
Degrees of Freedom
Knowing the sample size allows us to calculate a parameter called degrees of freedom - d.f. - used to
determine the value of the t statistic used in the confidence interval formula.

d.f. = n - 1 (degrees of freedom for a confidence interval for μ)


The degrees of freedom tell us how many observations we used to calculate s - the sample standard
deviation, less the number of intermediate estimates we used in our calculation
Number of observations that are free to vary after sample mean has been calculated

Comparison of z and t
As degrees of freedom increase, the t-values approach the familiar normal z-values.

Outliers and Messy Data


The t distribution assumes a normal population.
Confidence intervals using Student’s t are reliable as long as the population is not badly skewed and if
the sample size is not too small

Using Appendix D
Beyond d.f. = 50, Appendix D shows d.f. in steps of 5 or 10. If Appendix D does not show the exact degrees of freedom that you want, use the t-value for the next lower d.f.
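
A minimal sketch of the t-based interval X̄ ± tα/2·s/√n with d.f. = n − 1 (SciPy assumed; the data are made up). Using stats.t.ppf gives the exact t-value, so no Appendix D lookup or next-lower-d.f. approximation is needed:

import numpy as np
from scipy import stats

x = np.array([12.1, 9.8, 11.4, 10.6, 13.0, 10.9])   # hypothetical sample, sigma unknown
n, conf = len(x), 0.95
t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)  # t_(alpha/2) with n - 1 degrees of freedom
margin = t_crit * x.std(ddof=1) / np.sqrt(n)        # margin of error uses s, not sigma
print(f"{conf:.0%} CI for mu: {x.mean() - margin:.2f} to {x.mean() + margin:.2f}")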

8.6 CONFIDENCE INTERVAL FOR A PROPORTION (π)


The distribution of a sample proportion p = x/n tends toward normality as n increases.
The standard error σp will decrease as n increases, like the standard error for X̄. We say that p = x/n is a consistent estimator of π.
Standard Error of the Proportion
The standard error of the proportion is denoted σp. It depends on π and on n, being largest when the population proportion is near π = .50 and becoming smaller when π ≈ 0 or 1. The formula is symmetric in π and 1 − π.

Confidence Interval for π

Narrowing the Interval


The width of the confidence interval for π depends on

• Sample size n
• Confidence level
• Sample proportion p
A narrower interval (i.e., more precision) → increase the sample size or reduce the confidence level
(e.g., from 95 percent to 90 percent)
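
A minimal sketch of the normal-approximation interval p ± zα/2·√(p(1 − p)/n) (SciPy assumed; 72 successes out of n = 200 is a made-up example where np and n(1 − p) both exceed 10):

import math
from scipy import stats

x, n, conf = 72, 200, 0.95                 # hypothetical successes, trials, confidence level
p = x / n
z = stats.norm.ppf(1 - (1 - conf) / 2)
se = math.sqrt(p * (1 - p) / n)            # estimated standard error of p
print(f"p = {p:.3f}, {conf:.0%} CI for pi: {p - z * se:.3f} to {p + z * se:.3f}")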

Polls and Margin of Error


In polls and survey research, the margin of error is typically based on a 95 percent confidence level and the
initial assumption that π = .50
Each reduction in the margin of error requires a disproportionately larger sample size

Rule of Three
If in n independent trials, no events occur, the upper 95% confidence bound is approximately 3/n

8.7 ESTIMATING FROM FINITE POPULATIONS


The finite population correction factor (FPCF), √[(N − n)/(N − 1)], reduces the margin of error and provides a more precise interval estimate.
8.8 SAMPLE SIZE DETERMINATION FOR A MEAN
Sample Size to Estimate μ
n = (zσ/E)²
Estimate σ
Method 1: Take a Preliminary Sample
Take a small preliminary sample and use the sample estimate s in place of σ. This method is the most
common, though its logic is somewhat circular (i.e., take a sample to plan a sample).

Method 2: Assume Uniform Population


Estimate upper and lower limits a and b and set σ = [(b − a)²/12]^(1/2).

Method 3: Assume Normal Population


Estimate upper and lower bounds a and b and set σ = (b - a)/6. This assumes normality with most of the
data within μ + 3σ and μ - 3σ → the range is 6σ

Method 4: Poisson Arrivals


In the special case when λ is a Poisson arrival rate, then 𝝈 = √𝛌.
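
The sketch below (SciPy assumed; the margin of error E, the confidence level, and the bounds a and b are illustrative) applies n = (zσ/E)² with σ estimated by Method 2 and Method 3, rounding n up as usual:

import math
from scipy import stats

E, conf = 2.0, 0.95                            # desired margin of error and confidence level
z = stats.norm.ppf(1 - (1 - conf) / 2)
a, b = 10.0, 70.0                              # assumed lower and upper limits of the data

sigma_uniform = math.sqrt((b - a) ** 2 / 12)   # Method 2: uniform population
sigma_normal = (b - a) / 6                     # Method 3: normal population (range is about 6 sigma)

for label, sigma in [("uniform", sigma_uniform), ("normal", sigma_normal)]:
    n = math.ceil((z * sigma / E) ** 2)        # always round the required n up
    print(f"{label}: sigma = {sigma:.2f}, required n = {n}")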

8.9 SAMPLE SIZE DETERMINATION FOR A PROPORTION


The formula for the required sample size for a proportion
n = (z/E)² π(1 − π)

Estimate π
Method 1: Assume That π = .50
Method 2: Take a Preliminary Sample
Take a small preliminary sample and insert p into the sample size formula in place of π
Method 3: Use a Prior Sample or Historical Data
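
A minimal sketch of n = (z/E)²·π(1 − π) (SciPy assumed; the 3 percent margin of error and 95 percent confidence are made-up inputs). Method 1 (π = .50) gives the most conservative, i.e. largest, sample size:

import math
from scipy import stats

E, conf = 0.03, 0.95
z = stats.norm.ppf(1 - (1 - conf) / 2)
for pi in (0.50, 0.20):                            # Method 1 vs. a prior estimate (Methods 2/3)
    n = math.ceil((z / E) ** 2 * pi * (1 - pi))
    print(f"pi = {pi:.2f}: required n = {n}")

With π = .50 this gives n = 1,068, the familiar poll sample size for a ±3 percent margin of error.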

8.10 CONFIDENCE INTERVAL FOR A POPULATION VARIANCE σ2


Chi-Square Distribution
If the population is normal → construct a confidence interval for the population variance σ2 using the chi-
square distribution with degrees of freedom equal to d.f. = n – 1
Lower-tail and upper-tail percentiles for the chi-square distribution (denoted χ²L and χ²U) can be found in Appendix E.

(n − 1)s² / χ²U ≤ σ² ≤ (n − 1)s² / χ²L
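
A minimal sketch of this interval (SciPy assumed; the data are made up and the population is assumed normal), with the chi-square percentiles taken from scipy instead of Appendix E:

import numpy as np
from scipy import stats

x = np.array([14.2, 15.1, 13.8, 16.0, 14.9, 15.4, 13.5])   # hypothetical sample
n, alpha = len(x), 0.05
s2 = x.var(ddof=1)                                          # sample variance
chi2_L = stats.chi2.ppf(alpha / 2, df=n - 1)                # lower-tail percentile
chi2_U = stats.chi2.ppf(1 - alpha / 2, df=n - 1)            # upper-tail percentile
lower, upper = (n - 1) * s2 / chi2_U, (n - 1) * s2 / chi2_L
print(f"s^2 = {s2:.3f}, 95% CI for sigma^2: {lower:.3f} to {upper:.3f}")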
Chapter 9
9.1 LOGIC OF HYPOTHESIS TESTING
The analyst states the assumption, called a hypothesis, in a format that can be tested using well-known
statistical procedures.
Hypothesis testing is an ongoing, iterative process.

Who Uses Hypothesis Testing?

STEP 1: STATE THE HYPOTHESES


Formulate a pair of mutually exclusive, collectively exhaustive statements about the world. One
statement or the other must be true, but they cannot both be true.
o H0: Null Hypothesis
o H1: Alternative Hypothesis
Efforts will be made to reject the null hypothesis (maintained hypothesis or research hypothesis)
If we reject H0 – null hypothesis → the alternative hypothesis H1 is the case.
H0 represents the status quo (e.g., the current state of affairs), while H1 is called the action alternative because action may be required if we reject H0 in favor of H1.
STEP 2: SPECIFY THE DECISION RULE
The decision rule is a statement of the specific conditions under which we decide to either reject or fail to reject the null hypothesis.
Before collecting data to compare against the hypothesis, the researcher must specify how the evidence
will be used to reach a decision about the null hypothesis.

STEPS 3 AND 4: DATA COLLECTION AND DECISION MAKING


We compare the data with the hypothesis → using the decision rule → decide to reject or not reject
the null hypothesis

STEP 5: TAKE ACTION BASED ON DECISION


Appropriate action for the decision should relate back to the purpose of conducting the hypothesis test
in the first place.

9.2 TYPE I AND TYPE II ERROR


We have two possible choices concerning the null hypothesis. We either reject H0 or fail to reject H0

• Rejecting the null hypothesis when it is true is a Type I error (a false positive).
• Failure to reject the null hypothesis when it is false is a Type II error (a false negative).

Probability of Type I and Type II Errors


The probability of a Type I error (rejecting a true null hypothesis) is denoted α - level of significance

The probability of a Type II error (not rejecting a false hypothesis) is denoted β

The power of a test is the probability that a false null hypothesis will be rejected. Reducing β correspondingly increases power (e.g., by increasing the sample size).

Relationship between α and β


Given two equivalent tests, we will choose the more powerful test (smaller β)
The larger critical value needed to reduce α makes it harder to reject H0, thereby increasing β.

Both α and β can be reduced simultaneously only by increasing the sample size

9.3 DECISION RULES AND CRITICAL VALUES


A statistical hypothesis: a statement about the value of a population parameter that we are interested in
The hypothesized value of the parameter is the center of interest.
We rely on the sampling distribution and the standard error of the estimate to decide.
One-Tailed and Two-Tailed Tests

The direction of the test is indicated by which way the inequality symbol points in H1:

o < indicates a left-tailed test


o ≠ indicates a two-tailed test
o > indicates a right-tailed test

Decision Rule
Compare a sample statistic to the hypothesized value of the population parameter stated in the null
hypothesis

• Extreme outcomes occurring in the left tail → reject the null hypothesis in a left-tailed test
• Extreme outcomes occurring in the right tail → reject the null hypothesis in a right-tailed test
The area under the sampling distribution curve that defines an extreme outcome: the rejection region
Calculating a test statistic that measures the difference between the sample statistic and the
hypothesized parameter
➔ A test statistic that falls in the rejection region → reject H0

Critical Value
The critical value: the boundary between the two regions (reject H0, do not reject H0).
The decision rule states what the critical value of the test statistic would have to be in order to reject
H0 at the chosen level of significance (α).
The choice of α should precede the calculation of the test statistic, so there is no temptation to choose α afterward to obtain a desired result.

9.4 TESTING A MEAN: KNOWN POPULATION VARIANCE


Test Statistic
A test statistic measures the difference between a given sample mean X̄ and a benchmark μ0 in terms of the standard error of the mean.
Critical Value
Reject H0 if zcalc > + zα/2 or if zcalc < - zα/2
Otherwise fail to reject H0

p-Value Method
The p-value is a direct measure of the likelihood of the observed sample under H0
Compare the p-value with the level of significance.

• If the p-value is smaller than α, the sample contradicts the null hypothesis → reject H0

Reject H0 if P(Z > zcalc) < α


Otherwise fail to reject H0
o A large p-value (near 1.00) tends to support H0 → fail to reject H0
o A small p-value (near 0.00) tends to contradict H0 → reject H0

Two-Tailed Test
Reject H0 if zcalc > + zα/2 or if zcalc < - zα/2
Otherwise do not reject H0
USING THE P-VALUE APPROACH
In a two-tailed test, the decision rule using the p-value is the same as in a one-tailed test

Reject H0 if 2 × P(Z > |zcalc|) < α


Otherwise fail to reject H0
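
A minimal sketch of the test with known σ and its two-tailed p-value (SciPy assumed; μ0, σ, and the sample figures are illustrative):

import math
from scipy import stats

mu0, sigma = 500.0, 100.0            # hypothesized mean and known population sd
xbar, n, alpha = 520.0, 64, 0.05     # hypothetical sample mean, sample size, significance level
z_calc = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic
p_value = 2 * stats.norm.sf(abs(z_calc))         # 2 x P(Z > |z_calc|)
print(f"z_calc = {z_calc:.2f}, p-value = {p_value:.4f}",
      "-> reject H0" if p_value < alpha else "-> fail to reject H0")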
Analogy to Confidence Intervals
A two-tailed hypothesis test at the 5% level of significance (α = .05) is equivalent to asking whether the 95% confidence interval for the mean includes the hypothesized mean.

Reject H0 if μ0 ∉ [X̄ − zα/2·σ/√n, X̄ + zα/2·σ/√n]

9.5 TESTING A MEAN: UNKNOWN POPULATION VARIANCE



Using Student’s t
When the population standard deviation σ is unknown and the population may be assumed normal
(generally symmetric with no outliers)

➔ the test statistic follows the Student’s t distribution with d.f. = n - 1

SENSITIVITY TO α
The decision can be affected by our choice of α. Example:

Using the p-Value


After the p-value is calculated, different analysts can compare it to the level of significance (α)
From Appendix D we can only get a range for the p-value.

p-value < α then reject H0


p-value > α then fail to reject H0
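
A minimal sketch of the one-sample t test when σ is unknown (SciPy assumed; μ0 and the data are made up). scipy's ttest_1samp returns the exact two-tailed p-value, so the Appendix D range is not needed:

import numpy as np
from scipy import stats

x = np.array([21.4, 19.8, 22.5, 20.1, 23.0, 21.7, 20.9, 22.2])   # hypothetical sample
mu0, alpha = 20.0, 0.05
t_calc, p_value = stats.ttest_1samp(x, popmean=mu0)              # two-tailed test, d.f. = n - 1
print(f"t_calc = {t_calc:.3f}, p-value = {p_value:.4f}",
      "-> reject H0" if p_value < alpha else "-> fail to reject H0")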
Confidence Interval versus Hypothesis Test
A two-tailed test at significance level α is equivalent to a two-tailed 100(1 − α)% confidence interval.
If the confidence interval does not contain the hypothesized value μ0 → reject H0

9.6 TESTING A PROPORTION


Testing a proportion involves a categorical variable with two possible outcomes: an item either possesses the characteristic of interest or it does not. The proportion of the population in the category of interest is denoted by π.

The test statistic, calculated from sample data, is the difference between the sample proportion p and the hypothesized proportion π0 divided by the standard error of the proportion σp:

π0 is a benchmark - does not come from a sample

Calculating the p-Value

Reject H0 if P(Z > zcalc) < α


Otherwise fail to reject H0

Two-Tailed Test
CALCULATING A P-VALUE FOR A TWO-TAILED TEST
In a two-tailed test, p-value = 2 × P(Z > |zcalc|)
Reject H0 if 2 × P(Z > |zcalc|) < α
Otherwise fail to reject H0
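
A minimal sketch of the z test for a proportion (SciPy assumed; π0 and the sample counts are made up). Note that the standard error under H0 uses the hypothesized π0, not the sample p:

import math
from scipy import stats

pi0, x, n, alpha = 0.50, 118, 200, 0.05        # hypothetical benchmark and sample counts
p = x / n
se = math.sqrt(pi0 * (1 - pi0) / n)            # standard error of p under H0
z_calc = (p - pi0) / se
p_value = 2 * stats.norm.sf(abs(z_calc))       # two-tailed p-value
print(f"p = {p:.3f}, z_calc = {z_calc:.2f}, p-value = {p_value:.4f}",
      "-> reject H0" if p_value < alpha else "-> fail to reject H0")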

Effect of α
The test statistic zcalc is the same regardless of our choice of α, however, our choice of α does affect the
decision.

Which level of significance is the “right” one depends on how much Type I error we are willing to allow.
Smaller Type I error leads to increased Type II error
Chapter 10
10.1 TWO-SAMPLE TESTS
• A one-sample test compares a sample estimate against a non-sample benchmark.
• A two-sample test compares two sample estimates with each other.

Basis of Two-Sample Tests


Situations where two groups are to be compared:

• Before versus after


• Old versus new
• Experimental versus control
The null hypothesis H0: both samples were drawn from populations with the same parameter value
Two samples drawn from the same population → different estimates of a parameter due to chance.
If the two sample statistics differ by more than the amount attributable to chance → we conclude that the samples came from populations with different parameter values.
Test Procedure
The testing procedure is like that of one-sample tests.

10.2 COMPARING TWO MEANS: INDEPENDENT SAMPLES


• Samples are randomly and independently drawn
• Populations are normally distributed or both sample sizes are at least 30

Format of Hypotheses

Test Statistic
The sample statistic used to test the parameter μ1 − μ2 is X̄1 − X̄2. The test statistic will follow the same general format as the z- and t-scores in Chapter 9.

CASE 1: KNOWN VARIANCES

If the values of the population variances σ1² and σ2² are known, the test statistic is a z-score.

➔ Use the standard normal distribution to find p-values or critical values of zα.

CASE 2: UNKNOWN VARIANCES ASSUMED EQUAL

➔ Use the Student’s t distribution

We rely on the sample estimates s1² and s2².

By assuming that the population variances are equal → pool the sample variances by taking a weighted average of s1² and s2² → this gives an estimate of the common population variance, a.k.a. the pooled variance, denoted sp².
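
A minimal sketch of Case 2 (SciPy assumed; both samples are made up). scipy's ttest_ind with equal_var=True is exactly the pooled-variance t test with d.f. = n1 + n2 − 2; setting equal_var=False would give the Case 3 (Welch) test instead:

import numpy as np
from scipy import stats

x1 = np.array([23.1, 25.4, 22.8, 26.0, 24.3, 25.1])        # hypothetical group 1
x2 = np.array([21.9, 22.5, 23.4, 20.8, 22.1, 23.0])        # hypothetical group 2
t_calc, p_value = stats.ttest_ind(x1, x2, equal_var=True)  # pooled-variance t test
print(f"t_calc = {t_calc:.3f}, two-tailed p-value = {p_value:.4f}")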
CASE 3: UNKNOWN VARIANCES ASSUMED UNEQUAL

Population variances are unknown and cannot be assumed to be equal

Replacing σ1² and σ2² with the sample variances s1² and s2²
Common situation of testing for a zero difference (D0 = 0)

Test Statistic for Zero Difference of Means

10.3 CONFIDENCE INTERVAL FOR THE DIFFERENCE OF TWO MEANS, μ1 − μ2
Using a confidence interval estimate to find a range within which the true difference might fall
If the confidence interval for the difference of two means includes zero
➔ there is no significant difference in means
UNKNOWN VARIANCES ASSUMED EQUAL
The difference of means follows a Student’s t distribution with d.f. = (n1 - 1) + (n2 - 1).

UNKNOWN VARIANCES ASSUMED UNEQUAL


Use the t distribution, adding the variances and using Welch’s formula for the degrees of freedom

10.4 COMPARING TWO MEANS: PAIRED SAMPLES


Paired Data
When sample data consist of n matched pairs, a different approach is required.
If the same individuals are observed twice but under different circumstances → paired comparison
If we treat the data as two independent samples, ignoring the dependence between the data pairs, the test is less
powerful

Paired t Test
In the paired t Test we define a new variable d = X1 - X2 as the difference between X1 and X2.
The two samples are reduced to one sample of n differences d1, d2, . . . , dn. Presenting the n observed
differences in column form:

or row form:
We calculate the mean 𝒅̅ and standard deviation sd of the sample of n differences d1, d2, . . . , dn with the
usual formulas for a mean and standard deviation.

The population variance of d is unknown → use a paired t test with Student's t and d.f. = n − 1 to compare the sample mean difference d̄ with a hypothesized difference μd (usually μd = 0).
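
A minimal sketch of the paired t test (SciPy assumed; the before/after values are made up). stats.ttest_rel is equivalent to a one-sample t test on the differences d = X1 − X2 with d.f. = n − 1:

import numpy as np
from scipy import stats

before = np.array([182, 175, 190, 168, 201, 177])   # hypothetical paired observations
after  = np.array([178, 176, 183, 165, 194, 175])
t_calc, p_value = stats.ttest_rel(before, after)    # same as a one-sample t test on (before - after)
print(f"mean difference = {(before - after).mean():.2f}, "
      f"t_calc = {t_calc:.3f}, p-value = {p_value:.4f}")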

Analogy to Confidence Interval


A two-tailed test for a zero difference is equivalent to asking whether the confidence interval for the true mean difference μd includes zero.

10.5 COMPARING TWO PROPORTIONS


Testing for Zero Difference: π1 - π2 = 0
The three possible pairs of hypotheses:
Sample Proportions
A “success” is any event of interest (not necessarily something desirable)

Pooled Proportion
If H0 is true → there is no difference between π1 and π2
➔ the samples can be pooled into one "big" sample to estimate the combined population proportion pc

Test Statistic
Testing for zero difference
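
A minimal sketch of the pooled two-proportion z test of H0: π1 − π2 = 0 (SciPy assumed; the counts are made up):

import math
from scipy import stats

x1, n1 = 48, 200                  # hypothetical successes and trials, sample 1
x2, n2 = 36, 220                  # hypothetical successes and trials, sample 2
p1, p2 = x1 / n1, x2 / n2
pc = (x1 + x2) / (n1 + n2)                            # pooled proportion under H0
se = math.sqrt(pc * (1 - pc) * (1 / n1 + 1 / n2))     # pooled standard error
z_calc = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z_calc))              # two-tailed p-value
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z_calc = {z_calc:.2f}, p-value = {p_value:.4f}")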

Testing for Nonzero Difference (Optional)

10.6 CONFIDENCE INTERVAL FOR THE DIFFERENCE OF TWO PROPORTIONS, π1 − π2
A confidence interval for the difference of two population proportions, π1 - π2

The rule of thumb for assuming normality is that np ≥ 10 and n(1 - p) ≥ 10 for each sample
10.7 COMPARING TWO VARIANCES
Format of Hypotheses

An equivalent way to state these hypotheses is to look at the ratio of the two variances

The F Test
The test statistic is the ratio of the sample variances. Assuming the populations are normal, the test statistic
follows the F distribution

If the null hypothesis of equal variances is true, this ratio should be near 1:

Fcalc ≅ 1 (if H0 is true) → do not reject H0
Fcalc > FR or Fcalc < FL → reject H0
F distribution:
• mean is always greater than 1
• mode (the “peak” of the distribution) is always less than 1

Two-Tailed F Test
Critical values for the F test are denoted FL (left tail) and FR (right tail)
A right-tail critical value FR: found from Appendix F using d.f1. and d.f2.
FR = Fdf1, df2 (right-tail critical F)

To obtain a left-tail critical value FL we reverse the numerator and denominator degrees of freedom
FL = 1 / Fdf2,df1 (left-tail critical F with reversed df1 and df2)
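
A minimal sketch of the two-tailed F test for equal variances (SciPy assumed; the samples are made up). The right-tail critical value FR and the left-tail value FL = 1/Fdf2,df1 come from scipy rather than Appendix F:

import numpy as np
from scipy import stats

x1 = np.array([14.2, 15.8, 13.6, 16.4, 15.1, 14.7, 16.0])   # hypothetical sample 1
x2 = np.array([15.0, 15.3, 14.9, 15.4, 15.1, 15.2])         # hypothetical sample 2
alpha = 0.05
df1, df2 = len(x1) - 1, len(x2) - 1
F_calc = x1.var(ddof=1) / x2.var(ddof=1)          # ratio of sample variances
F_R = stats.f.ppf(1 - alpha / 2, df1, df2)        # right-tail critical value
F_L = 1 / stats.f.ppf(1 - alpha / 2, df2, df1)    # left-tail value with reversed d.f.
print(f"F_calc = {F_calc:.2f}, reject H0: {F_calc > F_R or F_calc < F_L}")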
CHAPTER 11
11.1 OVERVIEW OF ANOVA
Analysis of Variance (ANOVA) allows one to compare more than two means simultaneously.
Variation in Y about its mean is explained by one or more categorical independent variables (the
factors) or is unexplained (random error).

ANOVA is a comparison of means.


Each factor has two or more levels
Treatment: possible value of a factor or combination of factors
Example: examine whether Gender (Male, Female) and Region (North, Central, South) affect Income.

• Two factors: Gender, Region


• Gender has two levels: Male, Female
• Region has three levels: North, Central, South
• Six treatments: (Male, North); (Male, Central); (Male, South); (Female, North); (Female, Central); (Female, South)

ONE FACTOR ANOVA

N-FACTOR ANOVA

ANOVA Assumptions
Analysis of variance assumes that

• Observations on Y are independent.


• Populations being sampled are normal.
• Populations being sampled have equal variances
Test if each factor has a significant effect on Y:

• H0: µ1 = µ2 = µ3 =…= µc
• H1: Not all the means are equal
If we cannot reject H0, we conclude that observations within each treatment have the same mean µ.

11.2 ONE-FACTOR ANOVA (COMPLETELY RANDOMIZED MODEL)


Data Format
Only interested in comparing the means of c groups (treatments or factor levels) → one-factor ANOVA
Sample sizes within each treatment do not need to be equal.
The total number of observations:

n = n1 + n2 + … + nc

Hypotheses to Be Tested
The question of interest is whether the mean of Y varies from treatment to treatment.
o H0: μ1 = μ2 = . . . = μc (all the treatment means are equal)
o H1: Not all the means are equal (at least one pair of treatment means differs)

One-Factor ANOVA as a Linear Model


Observations in treatment j come from a population with a common mean (μ) + a treatment effect (Tj) + random error (εij), where Tj = ȳj − ȳ.

Random error is assumed to be normally distributed with zero mean and the same variance.
If we are interested only in what happens to the response for the particular levels of the factor, we have a fixed-effects model.
If the null hypothesis is true (Tj = 0 for all j) the ANOVA model is:

◼ The same mean in all groups, or no factor effect.

If the null hypothesis is false, the Tj that are negative (below μ) must be offset by the Tj that are positive (above μ) when weighted by sample size.

Decomposition of Variation

Group Means
The mean of each group is calculated in the usual way by summing the observations in the treatment and
dividing by the sample size

The overall sample mean or grand mean ȳ can be calculated by

o summing all the observations and dividing by n


o taking a weighted average of the c sample means
Partitioned Sum of Squares
For a given observation yij, the following relationship holds

This important relationship may be expressed simply as

One-Factor ANOVA Table


Test Statistic
The F statistic is the ratio of the variance due to treatments to the variance due to error.

• MSB is the mean square between treatments


• MSE is the mean square within treatments

The F test for equal treatment means is always a right-tailed test.

If there is little difference among treatments
➔ MSB will be near zero because the treatment means ȳj would be near the overall mean ȳ.
When F is near zero → we do not expect to reject H0 (the hypothesis of equal group means).

Decision Rule
Use Appendix F to obtain the right-tail critical value of F - denoted Fdf1,df2 or Fc-1,n-c
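
A minimal sketch of a one-factor ANOVA with c = 3 treatment groups (SciPy assumed; the data are made up). stats.f_oneway returns Fcalc = MSB/MSE and the right-tail p-value:

from scipy import stats

group1 = [18.2, 20.1, 19.5, 21.3, 18.8]    # hypothetical observations, treatment 1
group2 = [22.4, 23.0, 21.8, 24.1, 22.9]    # treatment 2
group3 = [19.9, 20.5, 21.0, 19.4, 20.8]    # treatment 3
F_calc, p_value = stats.f_oneway(group1, group2, group3)
print(f"F_calc = {F_calc:.2f}, p-value = {p_value:.4f}")

A small p-value leads to rejecting H0 of equal treatment means; a follow-up Tukey test would then identify which pairs of means differ.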

11.3 MULTIPLE COMPARISONS


Tukey’s Test
• Do after rejection of equal means in ANOVA
• Tells which population means are significantly different

e.g.: μ1 = μ2 ≠ μ3
Tukey’s studentized range test is a multiple comparison test
Tukey’s is a two-tailed test for simultaneous comparison of equality of paired means from c groups
The hypotheses to compare group j with group k:

Tukey’s test statistic

Reject H0 if Tcalc > Tc,n−c

Tc,n−c: the critical value for the chosen level of significance
11.4 TESTS FOR HOMOGENEITY OF VARIANCES
ANOVA Assumption
• ANOVA assumes that observations on the response variable are from normally distributed
populations with the same variance.
• The one-factor ANOVA test is only slightly affected by inequality of variance when group sizes are
equal.
• We can easily test this assumption of homogeneous variances by using Hartley’s Fmax Test.

Hartley’s Test

Hartley’s test statistic is the ratio of the largest sample variance to the smallest sample variance:

Hcalc = s²max / s²min
o Do not reject if Hcalc ≈ 1 the variances are the same
o Reject H0 if Hcalc > Hcritical
Critical values Hcritical may be found in Hartley's critical value table using degrees of freedom:

• Numerator: df1 = c
• Denominator: df2 = n/c − 1

11.5 TWO-FACTOR ANOVA WITHOUT REPLICATION (RANDOMIZED BLOCK MODEL)
Data Format
Two factors A and B may affect Y

• Each row is a level of factor A


• Each column is a level of factor B
• Each factor combination is observed exactly once
• The mean of Y can be computed either across the rows or down the columns
• The grand mean ȳ is the sum of all data values divided by the sample size rc
Two-Factor ANOVA Model
Expressed in linear form, the two-factor ANOVA model is:

yjk = μ + Aj + Bk + εjk

where

• yjk = observed data value in row j and column k


• μ = common mean for all treatments
• Aj = effect of row factor A (j = 1, 2, . . . , r)
• Bk = effect of column factor B (k = 1, 2, . . . , c)
• εjk = random error normally distributed with zero mean and the same variance for all treatments
Hypotheses to Be Tested
If we are interested only in what happens to the response for the particular levels of the factors:

FACTOR A
• H0: A1 = A2 =. . . = Ar = 0 (row means are the same)
• H1: Not all the Aj are equal to zero (row means differ)

FACTOR B
• H0: B1 = B2 =. . . = BC = 0 (column means are the same)
• H1: Not all the BK are equal to zero (column means differ)
If we are unable to reject either null hypothesis
➔ all variation in Y: a random disturbance around the mean μ:

yjk = μ + εjk
Randomized Block Model
When only one factor is of research interest and the other factor is merely used to control for potential confounding influences
In the randomized block model
• the column effects: treatments (as in one-factor ANOVA → the effect of interest)
• the row effects: blocks
A randomized block model looks like a two-factor ANOVA and is computed exactly like a two-factor ANOVA
Interpretation may resemble a one-factor ANOVA since only the column effects (treatments) are of interest
Format of Calculation of Nonreplicated Two-Factor ANOVA

The total sum of squares:

SST = SSA + SSB + SSE


where
• SST = total sum of squared deviations about the mean
• SSA = between rows sum of squares (effect of factor A)
• SSB = between columns sum of squares (effect of factor B)
• SSE = error sum of squares (residual variation)

Limitations of Two-Factor ANOVA without Replication


When replication is impossible or extremely expensive, two-factor ANOVA without replication
must suffice

11.6 TWO-FACTOR ANOVA WITH REPLICATION (FULL FACTORIAL MODEL)
What Does Replication Accomplish?
• With multiple observations within each cell → more detailed statistical tests
• With an equal number of observations in each cell (balanced data)
➔ a two-factor ANOVA model with replication
• Replication allows us to test
o the factors’ main effects
o an interaction effect
This model is called the full factorial model. In linear model format:

yijk = μ + Aj + Bk + ABjk + εijk


where

• yijk = observation i for row j and column k (i = 1, 2, . . . , m)


• μ = common mean for all treatments
• Aj = effect attributed to factor A in row j (j = 1, 2, . . . , r)
• Bk = effect attributed to factor B in column k (k = 1, 2, . . . , c)
• ABjk = effect attributed to interaction between factors A and B
• εijk = random error (normally distributed, zero mean, same variance for all treatments)
Format of Hypotheses
FACTOR A: ROW EFFECT
• H0: A1 = A2 = . . . = Ar = 0 (row means are the same)
• H1: Not all the Aj are equal to zero (row means differ)

FACTOR B: COLUMN EFFECT


• H0: B1 = B2 = . . . = Bc = 0 (column means are the same)
• H1: Not all the Bk are equal to zero (column means differ)

INTERACTION EFFECT
• H0: All the ABjk = 0 (there is no interaction effect)
• H1: Not all ABjk = 0 (there is an interaction effect)

Format of Data
Data Format of Replicated Two-Factor ANOVA

Sources of Variation
The total sum of squares is partitioned into four components:

SST = SSA + SSB + SSI + SSE


where

• SST = total sum of squared deviations about the mean


• SSA = between rows sum of squares (effect of factor A)
• SSB = between columns sum of squares (effect of factor B)
• SSI = interaction sum of squares (effect of AB)
• SSE = error sum of squares (residual variation)
Interaction Effect

• In the absence of an interaction, the lines will be roughly parallel or will tend to move in the same
direction at the same time.
• With a strong interaction, the lines will have differing slopes and will tend to cross one another

Tukey Tests of Pairs of Means


Multiple comparison

• Significant differences at α = .05 between clinics C, D and between suppliers (1, 4) and (3, 5).
• At α = .01 there is also a significant difference in means between one pair of suppliers (4, 5).
Significance versus Importance
MegaStat’s table of means (Figure 11.23) allows us to explore these differences further and to assess the
question of importance as well as significance.

The largest differences in means between clinics or suppliers are about 2 days. Such a small difference
might be unimportant most of the time.
However, if their inventory is low, a 2-day difference could be important
Chapter 12
12.1 VISUAL DISPLAYS AND CORRELATION ANALYSIS
Visual Displays
Analysis of bivariate data (i.e., two variables) typically begins with a scatter plot that displays each
observed data pair (xi, yi) as a dot on an X-Y grid.
➔ initial idea of the relationship between two random variables.

Correlation Coefficient
Sample correlation coefficient (Pearson correlation coefficient) - denoted r - measures the degree of
linearity in the relationship between two random variables X and Y.

Its value will fall in the interval [-1, 1].

• Negative correlation: xi tends to be above its mean when yi is below its mean (and vice versa)
• Positive correlation: xi and yi tend to be above (or below) their means at the same time
Three terms called sums of squares

The formula for the sample correlation coefficient

r = SSxy / (√SSxx · √SSyy)
Correlation coefficient only measures the degree of linear relationship between X and Y
Tests for Significant Correlation Using Student’s t
The sample correlation coefficient r is an estimate of the population correlation coefficient ρ

To test the hypothesis H0: ρ = 0, the test statistic is

tcalc = r √(n − 2) / √(1 − r²)
Compare this t test statistic with a critical value of t for a one-tailed or two-tailed test from Appendix D
using d.f. = n - 2 and any desired α.

After calculating tcalc → Find p-value by using Excel’s function =T.DIST.2T(tcalc,deg_freedom)
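
A minimal sketch (SciPy assumed; the (x, y) data are made up). stats.pearsonr returns r together with the two-tailed p-value, and the line below it reproduces the test statistic tcalc = r√(n − 2)/√(1 − r²):

import math
from scipy import stats

x = [2.0, 3.5, 4.1, 5.0, 6.2, 7.8, 8.5]    # hypothetical bivariate data
y = [3.1, 4.0, 4.8, 6.1, 6.0, 8.2, 8.9]
r, p_value = stats.pearsonr(x, y)
t_calc = r * math.sqrt(len(x) - 2) / math.sqrt(1 - r ** 2)   # t with d.f. = n - 2
print(f"r = {r:.3f}, t_calc = {t_calc:.2f}, two-tailed p-value = {p_value:.4f}")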

Critical Value for Correlation Coefficient


Equivalent approach → Calculate a critical value for the correlation coefficient

First: look up the critical value of t from Appendix D with d.f. = n - 2 degrees of freedom and chosen α

Then, the critical value of the correlation coefficient, rcritical

rcritical = t / √(t² + n − 2)
• a benchmark for the correlation coefficient
• no p-value
• inflexible when changing α
In very large samples, even very small correlations could be “significant”
A larger sample does not mean that the correlation is stronger nor does its increased significance imply
increased importance.
12.2 SIMPLE REGRESSION
What Is Simple Regression?
The simple linear model in slope-intercept form: Y = slope × X + y-intercept. In statistics this straight-line model is referred to as a simple regression equation.

• the Y variable as the response variable (the dependent variable)


• the X variable as the predictor variable (the independent variable)
Only the dependent variable (not the independent variable) is treated as a random variable

Interpreting an Estimated Regression Equation


Cause and effect is not proven by a simple regression
➔ cannot assume that the explanatory variable is “causing” the variation in the response variable

Prediction Using Regression


Predictions from our fitted regression model are stronger within the range of our sample x values.
The relationship seen in the scatter plot may not be true for values far outside our observed x range.
Extrapolation outside the observed range of x is always tempting but should be approached with
caution.

12.3 REGRESSION MODELS


Model and Parameters
The regression model’s unknown population parameters denoted by β0 (the intercept) and β1 (the
slope).

y = β0 + β1x + ε (population regression model)

Inclusion of a random error ε is necessary because other unspecified variables also may affect Y

The regression model without the error term represents the expected value of Y for a given x value
called simple regression equation

E(Y|x) = β0 + β1x (simple regression equation)

The regression assumptions.

• Assumption 1: The errors are normally distributed with mean 0 and standard deviation σ.
• Assumption 2: The errors have constant variance, σ2.
• Assumption 3: The errors are independent of each other.
The regression equation used to predict the expected value of Y for a given value of X:

ŷ = b0 + b1x (estimated regression equation)

• coefficient b0 (estimated intercept)
• coefficient b1 (estimated slope)

The difference between the observed value yi and its estimated value ŷi is called a residual, ei.

The residual is the vertical distance between each yi and the estimated regression line on a scatter
plot of (xi,yi) values.

12.4 ORDINARY LEAST SQUARES FORMULAS


Slope and Intercept
The ordinary least squares (OLS) method estimates the regression so as to ensure the best fit
➔ the slope and intercept are selected so that the residuals are as small as possible.
Residuals may be either positive or negative, and the residuals around the regression line always sum to zero.

The fitted coefficients b0 and b1 are chosen so that the fitted linear model ŷ = b0 + b1x has the smallest possible sum of squared residuals (SSE):

Differential calculus used to obtain the coefficient estimators b0 and b1 that minimize SSE
The OLS formula for the slope can also be:

OLS Regression Line Always Passes Through (x̄, ȳ)

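A minimal sketch of the OLS formulas (NumPy assumed; the data are made up): b1 = SSxy/SSxx and b0 = ȳ − b1x̄, which is exactly why the fitted line passes through (x̄, ȳ) and why the residuals sum to zero:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])     # hypothetical predictor values
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])     # hypothetical responses
xbar, ybar = x.mean(), y.mean()
b1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)   # slope = SSxy / SSxx
b0 = ybar - b1 * xbar                                            # intercept
residuals = y - (b0 + b1 * x)
print(f"y-hat = {b0:.3f} + {b1:.3f} x, sum of residuals = {residuals.sum():.1e}")
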
Sources of Variation in Y
The total variation, measured as a sum of squares (SST), is split into two parts:

• SST = total sum of squares


o Measures the variation of the yi values around their mean, y
• SSR = regression sum of squares
o Explained variation attributable to the linear relationship between x and y
• SSE = error sum of squares
o Variation attributable to factors other than the linear relationship between x and y

Coefficient of Determination
The coefficient of determination: the portion of the total variation in the dependent variable that is
explained by variation in the independent variable

The coefficient of determination called R-squared - denoted as R2

R² = SSR / SST = 1 − SSE / SST, noting that 0 ≤ R² ≤ 1

Examples of Approximate R² Values: the range of the coefficient of determination is 0 ≤ R² ≤ 1.

12.5 TESTS FOR SIGNIFICANCE


Standard Error of Regression
An estimator for the variance of the population model error:

σ̂² = se² = (Σ ei²) / (n − 2) = SSE / (n − 2)

Division by n − 2 → the simple regression model uses two estimated parameters, b0 and b1

se = √(se²) is the standard error of the estimate

COMPARING STANDARD ERRORS

The magnitude of se should always be judged relative to the size of the y values in the sample data
INFERENCES ABOUT THE REGRESSION MODEL
The variance of the regression slope coefficient (b1) is estimated by

s²b1 = se² / Σ(xi − x̄)² = se² / [(n − 1)s²x]

where:

sb1 = estimate of the standard error of the least squares slope

se = √[SSE / (n − 2)] = standard error of the estimate

Confidence Intervals for Slope and Intercept

These standard errors → construct confidence intervals for the true slope and intercept
using Student’s t with d.f. = n - 2 degrees of freedom and any desired confidence level.

Hypothesis Tests
if β1 = 0 ➔ X does not influence Y
→ the regression model collapses to a constant β0 + a random error term:

For either coefficient, we use a t test with d.f. = n - 2. The hypotheses and test statistics
SLOPE VERSUS CORRELATION
The test for zero slope is the same as the test for zero correlation.

➔ The t test for zero slope will always yield exactly the same tcalc as the t test for zero correlation

12.6 ANALYSIS OF VARIANCE: OVERALL FIT


Decomposition of Variance

F Statistic for Overall Fit


To test a regression for overall significance → an F test to compare the explained (SSR) and
unexplained (SSE) sums of squares
ANOVA Table for simple regression

The formula for the F test statistic

F Test p-Value and t Test p-Value


The F test always yields the same p-value as a two-tailed t test for zero slope → the same p-value as a two-tailed test for zero correlation.

The relationship between the test statistics is Fcalc = t²calc.
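
This identity is easy to verify numerically. A minimal sketch (NumPy assumed; the same style of made-up data as above) compares Fcalc = MSR/MSE from the ANOVA decomposition with t²calc for the slope:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])     # hypothetical data
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8])
n = len(x)
Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x
SSR = np.sum((y_hat - y.mean()) ** 2)            # explained sum of squares
SSE = np.sum((y - y_hat) ** 2)                   # error sum of squares
F_calc = (SSR / 1) / (SSE / (n - 2))             # MSR / MSE
t_calc = b1 / (np.sqrt(SSE / (n - 2)) / np.sqrt(Sxx))   # t test for zero slope
print(f"F_calc = {F_calc:.3f}, t_calc^2 = {t_calc ** 2:.3f}")   # the two values agree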


12.7 CONFIDENCE AND PREDICTION INTERVALS FOR Y
Construct an Interval Estimate for Y

Quick Rules for Confidence and Prediction Intervals


A really quick 95% interval → plug in t = 2 (since most 95 percent t-values are not far from 2)
