Econometrics (M.Sc.) 1 Exercises (with Solutions) Chapter 5
1. Problem
Review of rules for expectations and probability limits: Let β̂1 and β̂2 be two estimators based on a sample of size n. What applies to the expectations (for fixed, finite n) and probability limits (for n → ∞) of
(a) β̂1 + β̂2,
(b) c·β̂1, where c is a constant,
(c) β̂1 · β̂2,
(d) β̂1/β̂2,
(e) f(β̂1), where f is a continuous function?
You may assume throughout the exercise that the probability limits of β̂1 and β̂2 exist and that all assumptions from Chapter 5 hold.
Expectations (for fixed, finite n):
(a) E[β̂1 + β̂2] = E[β̂1] + E[β̂2]
(b) E[c · β̂1] = c · E[β̂1]
(c) E[β̂1 · β̂2] = E[β̂1] · E[β̂2] only if β̂1 and β̂2 are uncorrelated
(d) E[β̂1/β̂2] ≠ E[β̂1]/E[β̂2] in general; in particular, E[1/β̂2] ≠ 1/E[β̂2]
(e) E[f(β̂1)] ≠ f(E[β̂1]) in general; if f is concave, E[f(β̂1)] ≤ f(E[β̂1]) (Jensen’s inequality)

Probability limits (for n → ∞):
(a) plim(β̂1 + β̂2) = plim(β̂1) + plim(β̂2)
(b) plim(c · β̂1) = c · plim(β̂1)
(c) plim(β̂1 · β̂2) = plim(β̂1) · plim(β̂2)
(d) plim(β̂1/β̂2) = plim(β̂1)/plim(β̂2), if plim(β̂2) ≠ 0
(e) plim(f(β̂1)) = f(plim(β̂1)) (continuous mapping theorem)

2. Problem
(a) True or false: From asymptotic theory, we learn that, under appropriate conditions, the error term in a regression model will be approximately normally distributed if the sample size is sufficiently large. Explain your answer.
(b) In the lecture, strict exogeneity E[ε|X] = 0 was assumed in order to show unbiasedness of the OLS estimator. Is strict exogeneity also needed for showing (weak) consistency of the OLS estimator?

(a) This statement is false. Asymptotic theory tells us, among other things, something about the distribution of the OLS estimator, not about the involved error terms themselves. Clearly, if the error terms have a certain distribution, for example uniform over the interval [−1, 1] (to make sure that they have zero mean), they will keep this distribution for every n as n → ∞. However, the (appropriately scaled) OLS estimator will, under certain assumptions, converge in distribution to a multivariate normal distribution as n → ∞ (central limit theorem), even though it may not be normally distributed for finite/small n.
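This can be checked numerically. The following Monte Carlo sketch (a made-up simulation, with numbers chosen only for illustration) draws uniform errors on [−1, 1] and confirms that the distribution of the OLS slope across replications is nevertheless approximately normal:

```python
import numpy as np

# Hypothetical Monte Carlo sketch: the errors are uniform on [-1, 1],
# hence non-normal for every n, yet the OLS slope estimator is
# approximately normal across replications (the CLT applies to the
# estimator, not to the error terms).
rng = np.random.default_rng(0)
n, reps, beta0, beta1 = 500, 2000, 1.0, 2.0

slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    eps = rng.uniform(-1.0, 1.0, size=n)       # non-normal, zero-mean errors
    y = beta0 + beta1 * x + eps
    X = np.column_stack([np.ones(n), x])
    slopes[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

z = (slopes - slopes.mean()) / slopes.std()
excess_kurtosis = (z ** 4).mean() - 3.0        # close to 0 for a normal law
print(round(slopes.mean(), 3), round(excess_kurtosis, 2))
```

The estimates center tightly on the true slope, and the standardized estimates show essentially zero excess kurtosis, as a normal distribution would.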
(b) No. In order to show that the OLS estimator is a (weakly) consistent estimator, we actually only needed that E(Xik εi) = 0 for all k = 1, . . . , K; i.e., that the explanatory variables are orthogonal to the error terms. (This is a weaker assumption than E[ε|X] = 0.)
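A standard illustration of this distinction (sketched here as a hypothetical simulation, with parameters chosen for the example) is the AR(1) model Y_t = βY_{t−1} + ε_t: the regressor Y_{t−1} is orthogonal to the current error ε_t, so OLS is consistent, but strict exogeneity fails because ε_t drives the future regressor Y_t. Accordingly, OLS is biased for small n, with the bias vanishing as n grows:

```python
import numpy as np

# Hypothetical AR(1) sketch: strict exogeneity E[eps | X] = 0 fails
# (eps_t is correlated with the future regressor y_t), but the
# orthogonality condition E[y_{t-1} eps_t] = 0 holds, so OLS is
# consistent: biased for small n, with the bias vanishing as n grows.
rng = np.random.default_rng(1)
beta = 0.9

def mean_ols_ar1(n, reps):
    """Average OLS estimate of beta over `reps` simulated AR(1) samples of size n."""
    est = np.empty(reps)
    for r in range(reps):
        eps = rng.normal(size=n + 1)
        y = np.empty(n + 1)
        y[0] = eps[0]
        for t in range(1, n + 1):
            y[t] = beta * y[t - 1] + eps[t]
        x, yy = y[:-1], y[1:]                  # regress y_t on y_{t-1}
        est[r] = (x @ yy) / (x @ x)
    return est.mean()

small_n, large_n = mean_ols_ar1(25, 2000), mean_ols_ar1(5000, 200)
print(small_n, large_n)                        # downward bias shrinks with n
```

The finite-sample bias at n = 25 shows that unbiasedness is lost without strict exogeneity, while the estimate at n = 5000 sits essentially on the true β, consistent with the orthogonality-based consistency argument.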
3. Problem
Let Yi = Xi′β + εi such that the classical assumptions from Chapter 5 hold, and consider the so-called ridge regression estimator
β̃ = (X′X + λI_K)⁻¹ X′Y,

where λ > 0 is a constant.¹ Is β̃ a (weakly) consistent estimator for β?
We have that

β̃ = (X′X + λI_K)⁻¹ X′Y
  = (X′X + λI_K)⁻¹ X′X β + (X′X + λI_K)⁻¹ X′ε,

where we refer to the two summands as the first term and the second term. In the following, we examine the two terms separately.
For the first term we have that

(X′X + λI_K)⁻¹ X′X β = ((1/n) X′X + (λ/n) I_K)⁻¹ (1/n) X′X β →p S⁻¹_X′X S_X′X β = β.

Justifications:
• (1/n) X′X →p S_X′X: By Kolmogorov’s strong LLN we have element-wise almost sure convergence (1/n)[X′X]_kl →a.s. [S_X′X]_kl for all k, l = 1, …, K. The assumptions of Kolmogorov’s strong LLN (i.i.d. and existing first moment E([X′X]_kl) = [S_X′X]_kl) are fulfilled due to our Assumption 1⋆. Element-wise almost sure convergence implies matrix-valued almost sure convergence. Almost sure convergence implies convergence in probability.
• Obviously (λ/n) I_K → 0 as n → ∞. Note: deterministic convergence fulfills the definition of convergence in probability, so we also have (λ/n) I_K →p 0 as n → ∞. (Usually nobody writes it like that, but this should make it easier for you to see that one can apply the CMT (Continuous Mapping Theorem).)
• By the CMT we have that ((1/n) X′X + (λ/n) I_K)⁻¹ →p S⁻¹_X′X.
• Likewise, by the CMT, we have that ((1/n) X′X + (λ/n) I_K)⁻¹ (1/n) X′X β →p S⁻¹_X′X S_X′X β = β.
For the second term we have that

(X′X + λI_K)⁻¹ X′ε = ((1/n) X′X + (λ/n) I_K)⁻¹ (1/n) X′ε →p S⁻¹_X′X · 0 = 0,

where 0 is the (K × 1) zero vector.

¹ This estimator can improve the estimation when X′X is nearly singular.

Justifications:
• (1/n) X′ε →p 0: By Kolmogorov’s strong LLN we have element-wise almost sure convergence (1/n)[X′ε]_k →a.s. 0 for all k = 1, …, K. The assumptions of Kolmogorov’s strong LLN (i.i.d. and existing first moment E([X′ε]_k) = 0) are fulfilled due to our Assumptions 1⋆ and 2. Element-wise almost sure convergence implies vector-valued almost sure convergence. Almost sure convergence implies convergence in probability.
• The result for the second term then follows by the CMT.
Altogether: again by the CMT we can combine the results for the first and the second term, which allows us to conclude that β̃ →p β; i.e., the ridge estimator is indeed a consistent estimator.
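The conclusion can also be checked numerically. The sketch below (a hypothetical design with a fixed penalty λ = 10, chosen only for illustration) shows the ridge estimate approaching β as n grows, precisely because the perturbation (λ/n)I_K vanishes:

```python
import numpy as np

# Numerical sketch of the consistency result (hypothetical design):
# for fixed lambda > 0 the ridge estimator converges to beta as n grows,
# because the perturbation (lambda / n) * I_K vanishes in the limit.
rng = np.random.default_rng(2)
beta = np.array([1.0, -2.0, 0.5])
lam = 10.0                                     # fixed penalty, independent of n

def ridge(n):
    X = rng.normal(size=(n, len(beta)))
    y = X @ beta + rng.normal(size=n)
    return np.linalg.solve(X.T @ X + lam * np.eye(len(beta)), X.T @ y)

errs = {n: float(np.max(np.abs(ridge(n) - beta))) for n in (50, 500, 50000)}
print(errs)                                    # maximal error shrinks toward 0
```

At small n the fixed penalty visibly shrinks the estimate away from β (the estimator is biased), but the maximal error decays as n grows, matching the plim derivation above.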
