Simple Linear Regression Models
Raj Jain
Washington University in Saint Louis
Saint Louis, MO 63130
[email protected]
These slides are available on-line at:
http://www.cse.wustl.edu/~jain/cse567-08/
Overview

1. Definition of a Good Model
2. Estimation of Model Parameters
3. Allocation of Variation
4. Standard Deviation of Errors
5. Confidence Intervals for Regression Parameters
6. Confidence Intervals for Predictions
7. Visual Tests for Verifying Regression Assumptions
Simple Linear Regression Models
! Regression Model: Predict a response for a given set
of predictor variables.
! Response Variable: Estimated variable
! Predictor Variables: Variables used to predict the
response. Also called predictors or factors.
! Linear Regression Models: Response is a linear
function of predictors.
! Simple Linear Regression Models:
Only one predictor

Definition of a Good Model

[Figure: three scatter plots of y versus x with candidate regression lines; the first two fits are good, the third is bad]

Good Model (Cont)
! Regression models attempt to minimize the distance
measured vertically between the observation point
and the model line (or curve).
! The length of the line segment is called the residual,
modeling error, or simply error.
! The negative and positive errors should cancel out
⇒ Zero overall error
However, many lines can satisfy this criterion.

Good Model (Cont)
! Choose the line that minimizes the sum of squares of
the errors. The model is:
ŷ = b0 + b1 x
where ŷ is the predicted response when the
predictor variable is x. The parameters b0 and b1 are
fixed regression parameters to be determined from the
data.
! Given n observation pairs {(x1, y1), …, (xn, yn)}, the
estimated response for the ith observation is:
ŷi = b0 + b1 xi
! The error is:
ei = yi − ŷi

Good Model (Cont)
! The best linear model minimizes the sum of squared
errors (SSE):
SSE = Σ ei² = Σ (yi − b0 − b1 xi)²
subject to the constraint that the mean error is zero:
(1/n) Σ ei = 0
! This is equivalent to minimizing the variance of errors
(see Exercise).

Estimation of Model Parameters
! Regression parameters that give minimum error
variance are:
b1 = (Σ xy − n x̄ ȳ) / (Σ x² − n x̄²)
and
b0 = ȳ − b1 x̄
! where,
x̄ = (1/n) Σ x and ȳ = (1/n) Σ y
Example 14.1
! The number of disk I/O's and processor times of
seven programs were measured as: (14, 2), (16, 5),
(27, 7), (42, 9), (39, 10), (50, 13), (83, 20)
! For this data: n=7, Σ xy=3375, Σ x=271, Σ x²=13,855,
Σ y=66, Σ y²=828, x̄ = 38.71, ȳ = 9.43. Therefore,
b1 = (3375 − 7×38.71×9.43) / (13,855 − 7×38.71²) = 0.2438
b0 = 9.43 − 0.2438×38.71 = −0.0083
! The desired linear model is:
CPU time = −0.0083 + 0.2438 (number of disk I/O's)
Example 14.1 (Cont)
[Figure: scatter plot of CPU time versus number of disk I/O's with the fitted regression line]
Example 14.1 (Cont)
! Error Computation
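The per-observation errors and their sum of squares can be computed with a short continuation of the sketch above (values rounded):

errors = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]   # residuals ei = yi - predicted yi
sse = sum(e * e for e in errors)
print(round(sum(errors), 4), round(sse, 2))              # mean error ~0, SSE ~5.87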

Derivation of Regression Parameters
! The error in the ith observation is:
ei = yi − (b0 + b1 xi)
! For a sample of n observations, the mean error is:
ē = (1/n) Σ ei = ȳ − b0 − b1 x̄
! Setting the mean error to zero, we obtain:
b0 = ȳ − b1 x̄
! Substituting b0 in the error expression, we get:
ei = yi − ȳ − b1 (xi − x̄)
Derivation of Regression Parameters (Cont)
! The sum of squared errors SSE is:
SSE = Σ ei² = Σ [yi − ȳ − b1 (xi − x̄)]²
= Σ (yi − ȳ)² − 2 b1 Σ (xi − x̄)(yi − ȳ) + b1² Σ (xi − x̄)²
Derivation (Cont)
! Differentiating this equation with respect to b1 and
equating the result to zero:
d(SSE)/db1 = −2 Σ (xi − x̄)(yi − ȳ) + 2 b1 Σ (xi − x̄)² = 0
! That is,
b1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)² = (Σ xy − n x̄ ȳ) / (Σ x² − n x̄²)
Allocation of Variation
! Error variance without regression = variance of the response:
sy² = (1/(n−1)) Σ (yi − ȳ)² = (1/(n−1)) (Σ y² − n ȳ²)
and without regression the best prediction of y is its mean ȳ.
Allocation of Variation (Cont)
! The sum of squared errors without regression would be:
SST = Σ (yi − ȳ)²
! This is called the total sum of squares or (SST). It is a measure of
y's variability and is called variation of y. SST can be
computed as follows:
SST = Σ (yi − ȳ)² = Σ y² − n ȳ² = SSY − SS0
! Where, SSY is the sum of squares of y (or Σ y²). SS0 is the sum
of squares of ȳ and is equal to n ȳ².

Allocation of Variation (Cont)
! The difference between SST and SSE is the sum of squares
explained by the regression. It is called SSR:
SSR = SST − SSE
or
SST = SSR + SSE
! The fraction of the variation that is explained determines the
goodness of the regression and is called the coefficient of
determination, R²:
R² = SSR / SST = (SST − SSE) / SST
Allocation of Variation (Cont)
! The higher the value of R², the better the regression.
R²=1 ⇒ Perfect fit; R²=0 ⇒ No fit
! Coefficient of Determination = {Correlation Coefficient (x,y)}²
! Shortcut formula for SSE:
SSE = Σ y² − b0 Σ y − b1 Σ xy
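These sums of squares are easy to compute in code; a sketch continuing the Python example above (names are illustrative):

n = len(y)
ss_y = sum(yi * yi for yi in y)                  # SSY = sum of y^2
ss_0 = n * (sum(y) / n) ** 2                     # SS0 = n * (mean of y)^2
sst = ss_y - ss_0                                # total variation of y
sse = ss_y - b0 * sum(y) - b1 * sum(xi * yi for xi, yi in zip(x, y))   # shortcut formula
ssr = sst - sse                                  # variation explained by the regression
r_squared = ssr / sst                            # about 0.97 for the disk I/O data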

Example 14.2
! For the disk I/O-CPU time data of Example 14.1:
SSY = Σ y² = 828, SS0 = n ȳ² = 622.29, SST = SSY − SS0 = 205.71
SSE = Σ y² − b0 Σ y − b1 Σ xy = 5.87
SSR = SST − SSE = 199.84
R² = SSR/SST = 199.84/205.71 = 0.9715
! The regression explains 97% of CPU time's variation.


Standard Deviation of Errors
! Since errors are obtained after calculating two regression
parameters from the data, errors have n-2 degrees of freedom

! SSE/(n−2) is called the mean squared error (MSE).


! Standard deviation of errors = square root of MSE.
! SSY has n degrees of freedom since it is obtained from n
independent observations without estimating any parameters.
! SS0 has just one degree of freedom since it can be computed
simply from ȳ.
! SST has n−1 degrees of freedom, since one parameter (ȳ)
must be calculated from the data before SST can be computed.
Standard Deviation of Errors (Cont)
! SSR, which is the difference between SST and SSE,
has the remaining one degree of freedom.
! Overall,
SST = SSR + SSE, with degrees of freedom n − 1 = 1 + (n − 2)
! Notice that the degrees of freedom add just the way
the sums of squares do.

Example 14.3
! For the disk I/O-CPU data of Example 14.1, the
degrees of freedom of the sums are:
SST: 7−1 = 6, SSR: 1, SSE: 7−2 = 5
! The mean squared error is:
MSE = SSE/(n−2) = 5.87/5 = 1.17
! The standard deviation of errors is:
se = √1.17 = 1.08
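Continuing the Python sketch (sse comes from the error-computation snippet above):

mse = sse / (len(y) - 2)              # mean squared error, n-2 degrees of freedom
se = mse ** 0.5                       # standard deviation of errors
print(round(mse, 2), round(se, 2))    # about 1.17 and 1.08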

Confidence Intervals for Regression Params
! Regression coefficients b0 and b1 are estimates from a single
sample of size n ⇒ Random
⇒ Using another sample, the estimates may be different.
! β0 and β1 are the true parameters of the population. That is,
y = β0 + β1 x + e
! Computed coefficients b0 and b1 are estimates of β0 and β1,
respectively.

Confidence Intervals (Cont)
! The 100(1−α)% confidence intervals for b0 and b1 can be
computed using t[1−α/2; n−2] --- the 1−α/2 quantile of a t variate
with n−2 degrees of freedom. The confidence intervals are:
b0 ∓ t[1−α/2; n−2] sb0
and
b1 ∓ t[1−α/2; n−2] sb1
where sb0 and sb1 are the standard deviations of b0 and b1:
sb0 = se [1/n + x̄²/(Σ x² − n x̄²)]^(1/2)
sb1 = se / (Σ x² − n x̄²)^(1/2)
! If a confidence interval includes zero, then the regression
parameter cannot be considered different from zero at the
100(1−α)% confidence level.
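A sketch of this computation in Python, using scipy only for the t quantile (the function and variable names are illustrative, not from the slides):

import math
from scipy import stats

def regression_param_ci(x, y, b0, b1, alpha=0.10):
    n = len(x)
    x_bar = sum(x) / n
    sxx = sum(xi * xi for xi in x) - n * x_bar ** 2            # sum of x^2 minus n*xbar^2
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2))                              # std deviation of errors
    s_b0 = se * math.sqrt(1 / n + x_bar ** 2 / sxx)            # std deviation of b0
    s_b1 = se / math.sqrt(sxx)                                 # std deviation of b1
    t = stats.t.ppf(1 - alpha / 2, n - 2)                      # t[1-alpha/2; n-2]
    return (b0 - t * s_b0, b0 + t * s_b0), (b1 - t * s_b1, b1 + t * s_b1)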

Example 14.4
! For the disk I/O and CPU data of Example 14.1, we have n=7,
x̄=38.71, Σ x²=13,855, and se=1.0834.
! Standard deviations of b0 and b1 are:
sb0 = 1.0834 [1/7 + 38.71²/(13,855 − 7×38.71²)]^(1/2) = 0.8311
sb1 = 1.0834 / (13,855 − 7×38.71²)^(1/2) = 0.0187

Example 14.4 (Cont)
! From Appendix Table A.4, the 0.95-quantile of a t-variate with
5 degrees of freedom is 2.015.
⇒ 90% confidence interval for b0 is:
−0.0083 ∓ 2.015(0.8311) = (−1.68, 1.67)
! Since the confidence interval includes zero, the hypothesis that
this parameter is zero cannot be rejected at the 0.10 significance
level. ⇒ b0 is essentially zero.
! 90% confidence interval for b1 is:
0.2438 ∓ 2.015(0.0187) = (0.206, 0.281)
! Since the confidence interval does not include zero, the slope
b1 is significantly different from zero at this confidence level.
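Applying the regression_param_ci sketch above to the same data gives matching intervals (rounded):

print(regression_param_ci(x, y, b0, b1, alpha=0.10))
# approximately ((-1.68, 1.67), (0.21, 0.28))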
Case Study 14.1: Remote Procedure Call

Case Study 14.1 (Cont)
! UNIX:

Case Study 14.1 (Cont)
! ARGUS:

Case Study 14.1 (Cont)
! Best linear models are:

! The regressions explain 81% and 75% of the
variation, respectively.
! Does ARGUS take a larger time per byte as well as a
larger setup time per call than UNIX?

Case Study 14.1 (Cont)

! Intervals for intercepts overlap while those of the slopes do not.
⇒ Setup times are not significantly different in the two
systems while the per-byte times (slopes) are different.
Confidence Intervals for Predictions

! The predicted mean response for a given predictor value xp is:
ŷp = b0 + b1 xp
! This is only the mean value of the predicted response. Standard
deviation of the mean of a future sample of m observations is:
sŷpm = se [1/m + 1/n + (xp − x̄)²/(Σ x² − n x̄²)]^(1/2)
! m = 1 ⇒ Standard deviation of a single future observation:
sŷp1 = se [1 + 1/n + (xp − x̄)²/(Σ x² − n x̄²)]^(1/2)
CI for Predictions (Cont)
! m = ∞ ⇒ Standard deviation of the mean of a large
number of future observations at xp:
sŷp∞ = se [1/n + (xp − x̄)²/(Σ x² − n x̄²)]^(1/2)
! A 100(1−α)% confidence interval for the mean can be
constructed using a t quantile read at n−2 degrees of
freedom:
ŷp ∓ t[1−α/2; n−2] sŷp
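A Python sketch of these prediction intervals (illustrative; pass m=math.inf for the mean of a large number of future observations):

import math
from scipy import stats

def prediction_interval(x, y, b0, b1, xp, m=1, alpha=0.10):
    n = len(x)
    x_bar = sum(x) / n
    sxx = sum(xi * xi for xi in x) - n * x_bar ** 2
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2))                       # std deviation of errors
    y_hat = b0 + b1 * xp                                # predicted mean response at xp
    s_pred = se * math.sqrt(1 / m + 1 / n + (xp - x_bar) ** 2 / sxx)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    return y_hat - t * s_pred, y_hat + t * s_pred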

CI for Predictions (Cont)
! Goodness of the prediction decreases as we move
away from the center.

Example 14.5
! Using the disk I/O and CPU time data of Example
14.1, let us estimate the CPU time for a program with
100 disk I/O's.

! For a program with 100 disk I/O's,
the mean CPU time is:
ŷp = −0.0083 + 0.2438(100) = 24.37
Example 14.5 (Cont)
! The standard deviation of the predicted mean of a large number
of observations is:
sŷp∞ = 1.0834 [1/7 + (100 − 38.71)²/(13,855 − 7×38.71²)]^(1/2) = 1.22
! From Table A.4, the 0.95-quantile of the t-variate with 5
degrees of freedom is 2.015.
⇒ 90% CI for the predicted mean:
24.37 ∓ 2.015(1.22) = (21.92, 26.82)
Example 14.5 (Cont)
! CPU time of a single future program with 100 disk
I/O's:
sŷp1 = 1.0834 [1 + 1/7 + (100 − 38.71)²/(13,855 − 7×38.71²)]^(1/2) = 1.63
! 90% CI for a single prediction:
24.37 ∓ 2.015(1.63) = (21.09, 27.65)
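Applying the prediction_interval sketch above at xp = 100 (values rounded):

print(prediction_interval(x, y, b0, b1, xp=100, m=math.inf))   # mean: about (21.9, 26.8)
print(prediction_interval(x, y, b0, b1, xp=100, m=1))          # single: about (21.1, 27.6)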

Visual Tests for Regression Assumptions
Regression assumptions:
1. The true relationship between the response variable y
and the predictor variable x is linear.
2. The predictor variable x is non-stochastic and it is
measured without any error.
3. The model errors are statistically independent.
4. The errors are normally distributed with zero mean
and a constant standard deviation.

1. Linear Relationship: Visual Test
! Scatter plot of y versus x ⇒ Linear or nonlinear relationship

2. Independent Errors: Visual Test
1. Scatter plot of εi versus the predicted response

! All tests for independence simply try to find dependence.

Independent Errors (Cont)
2. Plot the residuals as a function of the experiment number

3. Normally Distributed Errors: Test
! Prepare a normal quantile-quantile plot of errors.
Linear ⇒ the assumption is satisfied.

4. Constant Standard Deviation of Errors
! Also known as homoscedasticity

! Trend ⇒ Try curvilinear regression or transformation
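These visual tests are easy to produce in code; a sketch using matplotlib and scipy (assumed available), continuing the example above:

import matplotlib.pyplot as plt
from scipy import stats

predicted = [b0 + b1 * xi for xi in x]
residuals = [yi - p for yi, p in zip(y, predicted)]

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(predicted, residuals)       # look for trends or changing spread
ax1.set_xlabel("Predicted response")
ax1.set_ylabel("Residual")
stats.probplot(residuals, plot=ax2)     # normal quantile-quantile plot of the errors
plt.show()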


Example 14.6
For the disk I/O and CPU time data of Example 14.1

[Figure: three diagnostic plots for the disk I/O data — CPU time in ms versus number of disk I/Os with the fitted line, residuals versus predicted response, and residual quantiles versus normal quantiles]

1. Relationship is linear
2. No trend in residuals ⇒ Seem independent
3. Linear normal quantile-quantile plot ⇒ Larger deviations at
lower values but all values are small
Example 14.7: RPC Performance

[Figure: residuals versus predicted response and residual quantiles versus normal quantiles for the RPC data]


1. Larger errors at larger responses
2. Normality of errors is questionable
Summary

! Terminology: Simple Linear Regression model, Sums of
Squares, Mean Squares, degrees of freedom, percent of
variation explained, Coefficient of determination, correlation
coefficient
! Regression parameters as well as the predicted responses have
confidence intervals
! It is important to verify assumptions of linearity, error
independence, error normality ⇒ Visual tests
Exercise 14.7
! The time to encrypt a k byte record using an encryption
technique is shown in the following table. Fit a linear
regression model to this data. Use visual tests to verify the
regression assumptions.

Exercise 2.1
! From published literature, select an article or a report
that presents results of a performance evaluation
study. Make a list of good and bad points of the study.
What would you do differently if you were asked to
repeat the study?

Homework 14
! Read Chapter 14
! Submit answers to exercise 14.7
! Submit answer to exercise 2.1

