Homework #2 Solution: Exercises 2.3 and 2.10
MTB > WOPEN "E:\Kurt\Documents\ise525\Blackboard\mm2-3_2018.DAT";
SUBC> FTYPE;
SUBC> TEXT;
SUBC> FIELD;
SUBC> COMMA;
SUBC> TDELIMITER;
SUBC> DOUBLEQUOTE;
SUBC> DECSEP;
SUBC> PERIOD;
SUBC> DATA;
SUBC> IGNOREBLANKROWS;
SUBC> EQUALCOLUMNS;
SUBC> SHEET 1;
SUBC> VNAMES -1;
SUBC> FIRST 1;
SUBC> NROWS 19.
Retrieving worksheet from file: 'E:\Kurt\Documents\ise525\Blackboard\mm2-3_2018.DAT'
Worksheet was saved on Mon Jan 22 2018

Results for: mm2-3_2018.DAT

Regression Analysis: y versus x2, x3, x4, x5

Analysis of Variance

Source      DF   Adj SS   Adj MS  F-Value  P-Value
Regression   4  22.3119   5.5780     7.16    0.002
  x2         1   0.0099   0.0099     0.01    0.912
  x3         1  11.4305  11.4305    14.67    0.002
  x4         1   2.8328   2.8328     3.64    0.077
  x5         1   0.6343   0.6343     0.81    0.382
Error       14  10.9091   0.7792
Total       18  33.2211

Comment 2 (2.3.b): The p-value for the regression, 0.002, is less than α, so the regression is significant. Since 0.912 > α, x2 does not contribute to the model.

Comment 4 (2.10.a): The estimated σ² is the error mean square, 0.7792.

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.882737  67.16%     57.78%      47.30%

Comment 5 (2.10.b): The standard errors of the coefficients are listed in the SE Coef column of the coefficients table.
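The ANOVA quantities above are tied together by simple SS/df arithmetic. A minimal Python check, using only the sums of squares quoted in the table:

```python
# Recomputing the ANOVA summary quantities from the Minitab output above;
# every input number is copied from the table, nothing is new data.
ss_regression = 22.3119   # regression sum of squares, df = 4
ss_error = 10.9091        # error sum of squares, df = 14
ss_total = 33.2211        # total sum of squares, df = 18

ms_regression = ss_regression / 4      # adj MS = adj SS / df -> 5.5780
ms_error = ss_error / 14               # 0.7792 = estimated sigma^2 (2.10.a)
f_value = ms_regression / ms_error     # F = MSR / MSE -> 7.16

r_sq = 1 - ss_error / ss_total                     # 0.6716 -> 67.16%
r_sq_adj = 1 - (ss_error / 14) / (ss_total / 18)   # 0.5778 -> 57.78%

print(round(ms_error, 4), round(f_value, 2), round(r_sq, 4), round(r_sq_adj, 4))
```

The same arithmetic reproduces every row of the table (e.g. x3: 11.4305 / 0.7792 = 14.67).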
MTB > Regress;
SUBC> Response 'y';
SUBC> Nodefault;
SUBC> Continuous 'x2' - 'x5';
SUBC> Terms x2 x3 x4 x5;
SUBC> Constant;
SUBC> Unstandardized;
SUBC> Tmethod;
SUBC> Tanova;
SUBC> Tsummary;
SUBC> Tcoefficients;
SUBC> Tequation;
SUBC> TDiagnostics 0.
Coefficients

Term         Coef  SE Coef  T-Value  P-Value   VIF
Constant     7.46     7.23     1.03    0.320
x2         -0.030    0.263    -0.11    0.912  1.44
x3          0.521    0.136     3.83    0.002  1.07
x4        -0.1018   0.0534    -1.91    0.077  1.41
x5          -2.16     2.39    -0.90    0.382  1.36

Comment 6 (2.10.c): The T-Value and P-Value columns give the t-test statistic and p-value for each coefficient; the VIF column gives the variance inflation factors.
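Each t-statistic in the coefficients table is just the estimate divided by its standard error. A quick Python check with two of the rows quoted above:

```python
# t = Coef / SE Coef, using the values reported in the coefficients table.
coef_x3, se_x3 = 0.521, 0.136
coef_x4, se_x4 = -0.1018, 0.0534

t_x3 = coef_x3 / se_x3   # matches the T-Value column (3.83)
t_x4 = coef_x4 / se_x4   # matches -1.91

print(round(t_x3, 2), round(t_x4, 2))
```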

Regression Equation

y = 7.46 - 0.030 x2 + 0.521 x3 - 0.1018 x4 - 2.16 x5

Comment 1 (2.3.a): The fitted multiple regression equation is given above.

Fits and Diagnostics for Unusual Observations

Obs      y     Fit   Resid  Std Resid
  2  8.300  10.065  -1.765      -2.07  R

R  Large residual
Prediction for y

MTB > Predict 'y';
SUBC> Nodefault;
SUBC> KPredictors 20 30 90 2;
SUBC> TEquation;
SUBC> TPrediction.

Regression Equation

y = 7.46 - 0.030 x2 + 0.521 x3 - 0.1018 x4 - 2.16 x5

Settings

Variable  Setting
x2             20
x3             30
x4             90
x5              2

Prediction

    Fit    SE Fit       95% CI               95% PI
8.99568  0.472445  (7.98238, 10.0090)  (6.84829, 11.1431)

Comment 3 (2.3.c): Using the regression model, the predicted pull strength is y = 8.99. Note that the standard error of the fit is 0.472.
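The fit and both intervals can be reproduced by hand from the output above. A sketch in Python, assuming the standard t-table value t(0.025, 14) ≈ 2.145 (14 error df); the plug-in fit uses the rounded coefficients, so it only approximates Minitab's 8.99568:

```python
import math

# Plug the settings x2=20, x3=30, x4=90, x5=2 into the fitted equation.
fit = 7.46 - 0.030*20 + 0.521*30 - 0.1018*90 - 2.16*2   # ~ 9.01 vs 8.99568

t14 = 2.145          # t(0.025, 14), from a t-table (assumed, not in the output)
se_fit = 0.472445
s = 0.882737         # residual standard deviation from the Model Summary

# 95% CI on the mean response: Fit +/- t * SE Fit
ci = (8.99568 - t14 * se_fit, 8.99568 + t14 * se_fit)

# 95% PI on a new observation: Fit +/- t * sqrt(s^2 + SE Fit^2)
se_pred = math.sqrt(s**2 + se_fit**2)
pi = (8.99568 - t14 * se_pred, 8.99568 + t14 * se_pred)

print(fit, ci, pi)
```

Both intervals land within rounding of Minitab's (7.98238, 10.0090) and (6.84829, 11.1431).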

Part 2
Stepwise Regression
Regression Analysis: y versus x2, x3, x4, x5
MTB > Regress;
SUBC> Response 'y';
SUBC> Nodefault;
SUBC> Continuous 'x2' - 'x5';
SUBC> Terms x2 x3 x4 x5;
SUBC> Constant;
SUBC> Unstandardized;
SUBC> Stepwise;
SUBC> AEnter 0.05;
SUBC> ARemove 0.05;
SUBC> Hierarchical;
SUBC> Always;
SUBC> Tmsdetails;
SUBC> Full;
SUBC> Tmethod;
SUBC> Tanova;
SUBC> Tsummary;
SUBC> Tcoefficients;
SUBC> Tequation;
SUBC> TDiagnostics 0.

Stepwise Selection of Terms

Candidate terms: x2, x3, x4, x5

               ----Step 1----    ----Step 2----
                 Coef       P      Coef       P
Constant        -8.97              4.66
x3              0.593   0.001     0.511   0.001
x4                              -0.1242   0.013

S             1.01072          0.855415
R-sq           47.72%           64.76%
R-sq(adj)                       60.35%
R-sq(pred)     35.08%           53.79%
Mallows' Cp      7.29             2.02

α to enter = 0.05, α to remove = 0.05

Comment 7: In the stepwise method, a predictor enters the model if its p-value < 0.05, and a predictor is removed if its p-value > 0.05.
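The entry rule in Comment 7 can be sketched as a small function. This is a toy illustration of the rule only, not Minitab's stepwise algorithm, and the candidate p-values below are hypothetical stand-ins (after each entry the model must be refit, which this sketch does not do):

```python
# Stepwise entry rule: the candidate with the smallest p-value enters the
# model if that p-value is below alpha; otherwise selection stops.
def next_to_enter(candidate_p, alpha=0.05):
    term = min(candidate_p, key=candidate_p.get)
    return term if candidate_p[term] < alpha else None

# Hypothetical single-term p-values for the first step (illustration only).
step1 = {"x2": 0.62, "x3": 0.001, "x4": 0.03, "x5": 0.40}
print(next_to_enter(step1))   # x3 has the smallest p-value, so it enters first
```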
Analysis of Variance

Source      DF  Adj SS  Adj MS  F-Value  P-Value
Regression   2                    14.70    0.000
  x3         1                    15.29    0.001
  x4         1   5.659  5.6588     7.73    0.013
Error       16  11.708  0.7317
Total       18  33.221

Comment 8: Using the stepwise method, the final model contains only the predictors x3 and x4. The p-value for the regression, 0.000, is smaller (more significant) than for the full model. Note also that the estimated σ² is smaller than for the full model, and R-sq(adj) is larger.
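Mallows' Cp, used in the step summaries, compares each candidate model's error sum of squares against the full model's MSE: Cp = SSE_p / MSE_full − n + 2p, where p counts the parameters including the intercept. Checking the final (x3, x4) model from the numbers quoted above:

```python
# Mallows' Cp for the reduced model, from quantities reported in the output.
sse_reduced = 11.708   # error SS of the x3, x4 model (ANOVA above)
mse_full = 0.7792      # error mean square of the full four-predictor model
n, p = 19, 3           # 19 observations; intercept + x3 + x4

cp = sse_reduced / mse_full - n + 2 * p
print(round(cp, 2))    # close to the 2.02 reported (Minitab uses unrounded MSE)
```

A Cp near p (here 3) or below indicates a model with little bias; 2.02 for the final step versus 7.29 for step 1 supports the selection.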

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.855415  64.76%     60.35%      53.79%

Coefficients

Term        Coef  SE Coef  T-Value  P-Value   VIF
Constant    4.66     6.39     0.73    0.477
x3         0.511    0.131     3.91    0.001  1.05
x4       -0.1242   0.0447    -2.78    0.013  1.05

Regression Equation

y = 4.66 + 0.511 x3 - 0.1242 x4

Fits and Diagnostics for Unusual Observations

Obs      y     Fit   Resid  Std Resid
  2  8.300  10.084  -1.784      -2.15  R

R  Large residual
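The "R" flag in the diagnostics table marks observations whose standardized residual is large in magnitude (beyond ±2). Checking observation 2 from the table above:

```python
# Minitab flags a large residual when the standardized residual exceeds 2
# in absolute value.
std_resid = -2.15          # observation 2, from the diagnostics table
flagged = abs(std_resid) > 2
print(flagged)             # True: observation 2 earns the "R" flag
```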

Backwards Elimination
Regression Analysis: y versus x2, x3, x4, x5
MTB > Name C8 'RESI'.
MTB > Regress;
SUBC> Response 'y';
SUBC> Nodefault;
SUBC> Continuous 'x2' - 'x5';
SUBC> Terms x2 x3 x4 x5;
SUBC> Constant;
SUBC> Unstandardized;
SUBC> Backward;
SUBC> ARemove 0.05;
SUBC> Hierarchical;
SUBC> Always;
SUBC> Tmsdetails;
SUBC> Full;
SUBC> Tmethod;
SUBC> Tanova;
SUBC> Tsummary;
SUBC> Tcoefficients;
SUBC> Tequation;
SUBC> TDiagnostics 0;
SUBC> Residuals 'RESI'.

Backward Elimination of Terms

Candidate terms: x2, x3, x4, x5

              ----Step 1----    ----Step 2----    ----Step 3----
                Coef       P      Coef       P      Coef       P
Constant        7.46               7.31              4.66
x2            -0.030   0.912
x3             0.521   0.002     0.519   0.001      0.511   0.001
x4           -0.1018   0.077   -0.1038   0.050    -0.1242   0.013
x5             -2.16   0.382     -2.26   0.314

S             0.882737          0.853192           0.855415
R-sq           67.16%            67.13%             64.76%
R-sq(adj)      57.78%            60.56%             60.35%
R-sq(pred)     47.30%            53.00%             53.79%
Mallows' Cp      5.00              3.01               2.02

α to remove = 0.05

Comment 9: In the backward elimination method, a predictor is removed if its p-value > 0.05. The full model shows x2 with p-value 0.912, so x2 is removed first. The second model shows x5 with p-value 0.314, so x5 is removed next.

Analysis of Variance

Source      DF  Adj SS  Adj MS  F-Value  P-Value
Regression   2                    14.70    0.000
  x3         1                    15.29    0.001
  x4         1   5.659  5.6588     7.73    0.013
Error       16  11.708  0.7317
Total       18  33.221
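The removal rule in Comment 9 can also be sketched as a small function. Again this is a toy illustration of the rule, not Minitab's implementation: it only applies the threshold test, while the p-values for steps 2 and 3 (taken from the output above) come from refitting the model after each removal:

```python
# Backward elimination rule: remove the term with the largest p-value,
# provided that p-value exceeds alpha; otherwise stop.
def worst_term(p_values, alpha=0.05):
    term = max(p_values, key=p_values.get)
    return term if p_values[term] > alpha else None

step1 = {"x2": 0.912, "x3": 0.002, "x4": 0.077, "x5": 0.382}   # full model
print(worst_term(step1))    # x2 removed first

step2 = {"x3": 0.001, "x4": 0.050, "x5": 0.314}   # after refitting without x2
print(worst_term(step2))    # x5 removed next

step3 = {"x3": 0.001, "x4": 0.013}                # after refitting without x5
print(worst_term(step3))    # None: all p-values <= 0.05, elimination stops
```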

Model Summary

       S    R-sq  R-sq(adj)  R-sq(pred)
0.855415  64.76%     60.35%      53.79%

Comment 10: The third model shows both predictors with p-value < 0.05, so it is the final backward elimination model. We therefore store its residuals for analysis.

Coefficients

Term        Coef  SE Coef  T-Value  P-Value   VIF
Constant    4.66     6.39     0.73    0.477
x3         0.511    0.131     3.91    0.001  1.05
x4       -0.1242   0.0447    -2.78    0.013  1.05

Regression Equation

y = 4.66 + 0.511 x3 - 0.1242 x4

Fits and Diagnostics for Unusual Observations

Obs      y     Fit   Resid  Std Resid
  2  8.300  10.084  -1.784      -2.15  R

R  Large residual

Prediction for y

MTB > Predict 'y';
SUBC> Nodefault;
SUBC> KPredictors 30 90;
SUBC> TEquation;
SUBC> TPrediction.

Regression Equation

y = 4.66 + 0.511 x3 - 0.1242 x4

Settings

Variable  Setting
x3             30
x4             90

Prediction

    Fit    SE Fit       95% CI              95% PI
8.81958  0.343267  (8.09189, 9.54727)  (6.86562, 10.7735)

Comment 12: The predicted pull strength is y = 8.82. Note that the SE Fit of 0.343 for the reduced model is smaller than when using the full model.

MTB > name c9 'zscore'
MTB > let 'zscore' = NSCOR('RESI')
MTB > Plot 'zscore'*'RESI' 'RESI'*'x3' 'RESI'*'x4';
SUBC> Symbol.

[Scatterplot of zscore vs RESI]

Comment 11: The normal probability plot of the residuals shows a nearly straight-line pattern, demonstrating that the normality assumption is satisfied.
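NSCOR assigns each residual a normal score (the z-value expected for its rank under normality), which is what the probability plot graphs against the residuals. A sketch in Python using Blom's rankit formula, a common approximation that should be close to, but not necessarily identical with, Minitab's NSCOR; the residuals below are hypothetical, just to show the shape of the result:

```python
import statistics

# Normal scores via Blom's approximation: z_i = Phi^-1((r_i - 3/8)/(n + 1/4)),
# where r_i is the rank of value i among the n values (ties ignored here).
def normal_scores(values):
    n = len(values)
    ranks = {v: i + 1 for i, v in enumerate(sorted(values))}
    nd = statistics.NormalDist()
    return [nd.inv_cdf((ranks[v] - 0.375) / (n + 0.25)) for v in values]

# Hypothetical residuals (illustration only, not the stored RESI column):
print(normal_scores([-1.8, 0.2, 0.9, -0.3, 1.1]))
```

Plotting these scores against the residuals themselves gives the straight-line pattern described in Comment 11 when the residuals are approximately normal.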

[Scatterplot of RESI vs x3]

[Scatterplot of RESI vs x4]

Comment 13: Both the plot of residuals vs x3 and the plot of residuals vs x4 show similar vertical spread of the residuals throughout the range of the predictor, demonstrating that the equal-variance assumption is satisfied. Neither plot shows a recognizable functional pattern, so there is no indication of lack of fit.