
Review of Probability and Statistics
Zhenhao Gong University of Connecticut

Welcome

This course is designed to be:
1. Introductory
2. Focused on the core techniques with the widest applicability
3. Less math, useful, and fun!

Most important: feel free to ask any questions!
Enjoy!

Goal

Review the core ideas of the theory of probability and statistics that are needed for regression analysis and forecasting:
- The probability framework for statistical inference
- Moments, covariance, and correlation
- Sampling distributions and estimation
- Hypothesis testing and confidence intervals

Review

- Randomness, random variables
- Population, population distribution (discrete and continuous)
- Moments: mean, variance, skewness, and kurtosis
- Joint distribution, covariance, and correlation
- Conditional distribution, conditional mean
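As a quick refresher on the sample analogues of these quantities, here is a minimal sketch (my addition, not from the slides; it assumes NumPy and SciPy are installed and uses made-up data for two variables X and Y):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=1_000)
Y = 0.5 * X + rng.normal(size=1_000)       # Y is correlated with X by construction

print(Y.mean(), Y.var(ddof=1))             # sample mean and variance
print(stats.skew(Y), stats.kurtosis(Y, fisher=False))  # sample skewness and kurtosis
print(np.cov(X, Y, ddof=1)[0, 1])          # sample covariance of X and Y
print(np.corrcoef(X, Y)[0, 1])             # sample correlation of X and Y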

Sampling distribution

- Distribution of a sample of data drawn randomly from an unknown population: $Y_1, \dots, Y_T \sim f(y)$. (Why do we need to do this?)
- Under simple random sampling, $Y_1, \dots, Y_T$ are independently and identically distributed (i.i.d.).

This framework allows rigorous statistical inference about moments of population distributions, using a sample of data from that population.
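A minimal sketch of simple random sampling (my addition; it assumes NumPy and uses a normal distribution as a stand-in for the unknown population f(y)):

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the unknown population f(y): pretend it is N(20, 5^2)
mu_Y, sigma_Y = 20.0, 5.0

# Simple random sampling: T i.i.d. draws Y_1, ..., Y_T
T = 200
Y = rng.normal(mu_Y, sigma_Y, size=T)
print(Y[:5])   # the first few observations of the sample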

Estimation

We use the sampling distribution of $\bar{Y}$ to do statistical inference, so this is the key concept.
- $\bar{Y} = (Y_1 + Y_2 + \cdots + Y_T)/T$ is a random variable.
- The distribution of $\bar{Y}$ over different possible samples of size $T$ is called the sampling distribution of $\bar{Y}$.
- The mean and variance of $\bar{Y}$ are the mean and variance of its sampling distribution, $E(\bar{Y})$ and $\mathrm{Var}(\bar{Y})$.

Estimation

The connection between the moments of the sampling distribution, $E(\bar{Y})$ and $\mathrm{Var}(\bar{Y})$, and the corresponding population moments $E(Y) = \mu_Y$ and $\mathrm{Var}(Y) = \sigma_Y^2$:
- Mean: $E(\bar{Y}) = E\!\left(\frac{1}{T}\sum_{t=1}^{T} Y_t\right) = \frac{1}{T}\sum_{t=1}^{T} E(Y_t) = \mu_Y$, so $\bar{Y}$ is an unbiased estimator of $\mu_Y$.
- Variance: under i.i.d. sampling,
$$\mathrm{Var}(\bar{Y}) = \mathrm{Var}\!\left(\frac{1}{T}\sum_{t=1}^{T} Y_t\right) = \frac{1}{T^2}\,\mathrm{Var}\!\left(\sum_{t=1}^{T} Y_t\right) = \frac{\sigma_Y^2}{T},$$
where $\sigma_Y^2$ can be estimated by the sample variance $s_Y^2 = \frac{\sum_{t=1}^{T}(Y_t - \bar{Y})^2}{T-1}$.
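A quick Monte Carlo check of these two results (my addition; it assumes NumPy and a made-up normal population):

import numpy as np

rng = np.random.default_rng(1)
mu_Y, sigma_Y, T = 20.0, 5.0, 50   # hypothetical population and sample size
n_reps = 100_000                   # number of repeated samples of size T

# Draw many samples of size T and compute Y-bar for each
Y_bar = rng.normal(mu_Y, sigma_Y, size=(n_reps, T)).mean(axis=1)

print(Y_bar.mean())        # close to mu_Y (unbiasedness)
print(Y_bar.var(ddof=1))   # close to sigma_Y**2 / T
print(sigma_Y**2 / T)      # theoretical Var(Y-bar) = 0.5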

Magic

For small sample sizes, the sampling distribution of $\bar{Y}$ is complicated and depends on the distribution of $Y$; but if $T$ is large, the sampling distribution of $\bar{Y}$ becomes simple!
1. As $T$ increases, the distribution of $\bar{Y}$ becomes more tightly centered around $\mu_Y$ (the Law of Large Numbers).
2. Moreover, the distribution of $\bar{Y}$ becomes normal (the Central Limit Theorem):
- $\bar{Y}$ is approximately distributed $N(\mu_Y,\, \sigma_Y^2/T)$.
- The standardized $\bar{Y}$, $Z = \sqrt{T}\,(\bar{Y} - \mu_Y)/\sigma_Y$, is approximately distributed $N(0,1)$ (standard normal).
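A simulation sketch of both results (my addition; it assumes NumPy and uses a deliberately skewed exponential population with mean 1 and variance 1):

import numpy as np

rng = np.random.default_rng(2)
mu_Y, sigma_Y = 1.0, 1.0           # exponential(1) has mean 1 and variance 1
T, n_reps = 500, 50_000

Y_bar = rng.exponential(scale=1.0, size=(n_reps, T)).mean(axis=1)

# LLN: Y-bar is tightly centered around mu_Y
print(Y_bar.mean(), Y_bar.std())

# CLT: the standardized mean is approximately N(0, 1),
# so about 5% of the draws should fall outside +/- 1.96
Z = np.sqrt(T) * (Y_bar - mu_Y) / sigma_Y
print(np.mean(np.abs(Z) > 1.96))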

Example

Suppose $Y$ takes on 0 or 1 (a Bernoulli random variable) with the probability distribution
- $P(Y = 1) = p = 0.78$ and $P(Y = 0) = 1 - p = 0.22$
- $E(Y) = \mu_Y = p \times 1 + (1 - p) \times 0 = 0.78$

The sampling distribution of $\bar{Y}$ depends on $T$. Consider $T = 2$; then the sampling distribution of $\bar{Y}$ is
- $P(\bar{Y} = 0) = 0.22^2 = 0.0484$
- $P(\bar{Y} = 1/2) = 2 \times 0.22 \times 0.78 = 0.3432$
- $P(\bar{Y} = 1) = 0.78^2 = 0.6084$
- The distribution of $\bar{Y}$ gets very complicated as $T$ grows larger! (See the sketch below.)
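In this Bernoulli case $T\bar{Y}$ is Binomial($T, p$), so the sampling distribution can be tabulated exactly; a short sketch (my addition, assuming SciPy):

from scipy import stats

p = 0.78

# T = 2: P(Y-bar = k/T) = P(Binomial(T, p) = k)
T = 2
for k in range(T + 1):
    print(f"P(Y_bar = {k}/{T}) = {stats.binom.pmf(k, T, p):.4f}")
# prints 0.0484, 0.3432, 0.6084, matching the slide

# For larger T the support has T + 1 points, so tabulating it
# by hand quickly becomes impractical
T = 25
print([round(stats.binom.pmf(k, T, p), 4) for k in range(T + 1)])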

Test for normality

Test whether the normal distribution governs a given sample of data:
- Check whether the sample skewness and kurtosis are close to their values under normality ($\hat{S} \approx 0$ and $\hat{K} \approx 3$).
- The Jarque-Bera (JB) test statistic:
$$JB = \frac{T}{6}\left[\hat{S}^2 + \frac{1}{4}\,(\hat{K} - 3)^2\right],$$
which is a $\chi^2$ random variable with two degrees of freedom in large samples, under the null hypothesis of independent, normally distributed observations.
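A sketch of the JB computation (my addition; it assumes NumPy and SciPy — scipy.stats.jarque_bera computes the same statistic and is printed as a cross-check):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.normal(size=1_000)               # data to be tested for normality
T = len(y)

S_hat = stats.skew(y)                    # sample skewness
K_hat = stats.kurtosis(y, fisher=False)  # sample kurtosis (not excess kurtosis)

JB = (T / 6) * (S_hat**2 + 0.25 * (K_hat - 3)**2)
p_value = stats.chi2.sf(JB, df=2)        # chi-squared with 2 degrees of freedom
print(JB, p_value)
print(stats.jarque_bera(y))              # SciPy's built-in version, for comparison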

Hypothesis testing and confidence intervals

A hypothesis is a yes/no question:
- Did average daily household spending in the U.S. equal 20 dollars last year?
- Were the average monthly returns the same for two different investment portfolios over the last ten years?

Both questions concern the population distributions of spending and returns. The statistical challenge is to answer these questions based on a sample of evidence.

Hypothesis Testing

The hypothesis testing problem for the mean $E(Y)$:
- Make a provisional decision, based on the evidence at hand, as to whether a null hypothesis is true or instead some alternative hypothesis is true. That is, test
$$H_0 : E(Y) = \mu_{Y,0} \quad \text{vs.} \quad H_1 : E(Y) \neq \mu_{Y,0}$$

Remark: we can either reject the null hypothesis or fail to do so.

P-value

P-value = the probability of drawing a value of $\bar{Y}$ that differs from $\mu_{Y,0}$ by at least as much as $\bar{Y}^{act}$, the value actually computed with your data, assuming that the null hypothesis is true.

Calculating the p-value based on $\bar{Y}$:
$$\text{p-value} = P_{H_0}\!\left[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\right],$$
where $\bar{Y}^{act}$ is the value of $\bar{Y}$ actually observed (nonrandom).
- To compute the p-value, we need to know the sampling distribution of $\bar{Y}$, which is complicated if $T$ is small.
- If $T$ is large, we can use the normal approximation (CLT).

Compute P-value

When $T$ is large, we know from the CLT that the sampling distribution of $\bar{Y}$ under the null is $N(\mu_{Y,0},\, \sigma_Y^2/T)$. Thus $Z = (\bar{Y} - \mu_{Y,0})/(\sigma_Y/\sqrt{T}) \sim N(0,1)$.
$$\text{p-value} = P_{H_0}\!\left[\,|\bar{Y} - \mu_{Y,0}| > |\bar{Y}^{act} - \mu_{Y,0}|\,\right] = P_{H_0}\!\left[\left|\frac{\bar{Y} - \mu_{Y,0}}{\sigma_Y/\sqrt{T}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_Y/\sqrt{T}}\right|\right] = 2\,\Phi\!\left(-\left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{\sigma_Y/\sqrt{T}}\right|\right),$$
where $\Phi$ is the standard normal cumulative distribution function.


Note: $\sigma_{\bar{Y}} = \sigma_Y/\sqrt{T}$

P-value

In practice, $\sigma_Y$ is unknown and can be estimated by the sample variance of $Y$:
$$s_Y^2 = \frac{1}{T-1}\sum_{t=1}^{T}(Y_t - \bar{Y})^2.$$
$$\Rightarrow\ \text{p-value} \approx P_{H_0}\!\left[\left|\frac{\bar{Y} - \mu_{Y,0}}{s_Y/\sqrt{T}}\right| > \left|\frac{\bar{Y}^{act} - \mu_{Y,0}}{s_Y/\sqrt{T}}\right|\right] = P_{H_0}\!\left(|t| > |t^{act}|\right),$$
where $t = \dfrac{\bar{Y} - \mu_{Y,0}}{s_Y/\sqrt{T}}$ is the t-statistic, which is approximately standard normal when $T$ is large.
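A minimal sketch of this feasible test as a function (my addition; it assumes NumPy/SciPy and a hypothetical data array y — the function name t_test_mean is mine):

import numpy as np
from scipy import stats

def t_test_mean(y, mu_0):
    """Large-sample t-statistic and p-value for H0: E(Y) = mu_0."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    y_bar = y.mean()
    s_Y = y.std(ddof=1)                         # sample standard deviation
    t_stat = (y_bar - mu_0) / (s_Y / np.sqrt(T))
    p_value = 2 * stats.norm.cdf(-abs(t_stat))  # normal approximation for large T
    return t_stat, p_value

# Hypothetical usage with simulated spending data
rng = np.random.default_rng(4)
y = rng.normal(22, 18, size=200)
print(t_test_mean(y, mu_0=20.0))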

Example

Suppose we want to test the null hypothesis that average daily household spending in the U.S. over the last five years is $E(Y) = 20$ dollars, using a sample of $T = 200$ representative households.
- Step 1: compute the sample average spending, $\bar{Y}^{act} = 22.64$ dollars.
- Step 2: compute the sample standard deviation, $s_Y = 18.14$ dollars, so $s_Y/\sqrt{T} = 18.14/\sqrt{200} = 1.28$.
- Step 3: compute the value of the t-statistic, $t^{act} = (22.64 - 20)/1.28 = 2.06$, so the p-value is $2\,\Phi(-2.06) = 0.039$, or 3.9%.

That is, assuming the null hypothesis is true, the probability of obtaining a sample average at least as different from the null as the one actually computed is 3.9%. Reject (at the 5% significance level)!
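These three steps can be reproduced directly from the summary numbers (my addition; it assumes NumPy and SciPy):

import numpy as np
from scipy import stats

y_bar_act, mu_0 = 22.64, 20.0      # sample mean and null value (dollars)
s_Y, T = 18.14, 200                # sample standard deviation and sample size

se = s_Y / np.sqrt(T)                       # Step 2: standard error, about 1.28
t_act = (y_bar_act - mu_0) / se             # Step 3: t-statistic, about 2.06
p_value = 2 * stats.norm.cdf(-abs(t_act))   # about 0.039, i.e. 3.9%
print(se, t_act, p_value)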

Significance levels

We can do the hypothesis test without computing the p-value by using a prespecified significance level instead. For example, if the prespecified significance level is 5%,
- we reject the null hypothesis if $|t^{act}| > 1.96$;
- equivalently, we reject if the p-value $\le 0.05$.

This is often used in empirical studies. The most popular significance levels are 10%, 5%, and 1%, and the corresponding two-sided critical values are 1.64, 1.96, and 2.58.
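These critical values are just quantiles of the standard normal distribution; a quick check (my addition, assuming SciPy):

from scipy import stats

for alpha in (0.10, 0.05, 0.01):
    # two-sided critical value: the (1 - alpha/2) quantile of N(0, 1)
    print(alpha, round(stats.norm.ppf(1 - alpha / 2), 3))
# prints 1.645, 1.96, and 2.576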

Confidence Interval

A 95% confidence interval for $\mu_Y$ is an interval that contains the true value of $\mu_Y$ in 95% of repeated samples:
$$\left\{\mu_Y : \left|\frac{\bar{Y}^{act} - \mu_Y}{s_Y/\sqrt{T}}\right| \le 1.96\right\} = \left\{\mu_Y : -1.96 \le \frac{\bar{Y}^{act} - \mu_Y}{s_Y/\sqrt{T}} \le 1.96\right\}$$
$$\Rightarrow\ \mu_Y \in \left(\bar{Y}^{act} - 1.96\,\frac{s_Y}{\sqrt{T}},\ \bar{Y}^{act} + 1.96\,\frac{s_Y}{\sqrt{T}}\right).$$

Example: given $\bar{Y}^{act} = 22.64$ dollars and $s_Y/\sqrt{T} = 1.28$, the 95% confidence interval for mean daily spending is
$$(22.64 - 1.96 \times 1.28,\ 22.64 + 1.96 \times 1.28) = (\$20.13,\ \$25.15).$$
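The same interval computed in code (my addition; it assumes NumPy only):

import numpy as np

y_bar_act, s_Y, T = 22.64, 18.14, 200
se = s_Y / np.sqrt(T)                           # about 1.28

ci_lower = y_bar_act - 1.96 * se
ci_upper = y_bar_act + 1.96 * se
print(round(ci_lower, 2), round(ci_upper, 2))   # about 20.13 and 25.15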
