Problem 1: 15 Marks
Suppose we see excess returns xit on m assets (i = 1,2,··· ,m) over T time periods (t = 1,2,··· ,T). We may write these together as a vector at time t:
\[
x_t = \begin{pmatrix} x_{1t} \\ x_{2t} \\ \vdots \\ x_{mt} \end{pmatrix}.
\]
Suppose these returns are driven by the following two-factor model:
\[
x_{it} = \alpha_i + \beta_{1i} f_{1t} + \beta_{2i} f_{2t} + \varepsilon_{it}.
\]
The covariance matrix of f1t and f2t is given by:
\[
\Omega_f = \begin{pmatrix} \sigma_f^2 & \sigma_f \rho_f \\ \sigma_f \rho_f & 1 \end{pmatrix}.
\]
Suppose the covariance matrix of εit for all i is given by:
\[
\Psi = \begin{pmatrix}
\sigma_1^2 & 0 & \cdots & 0 \\
0 & \sigma_2^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma_m^2
\end{pmatrix}.
\]
You may assume that Cov(fkt,εit′) = 0 for any k, i, t and t′, and that both fkt and εit are uncorrelated over time.
(a) Let Σ = Cov(xt) be the covariance of asset returns. Express Σ in terms of Ωf, Ψ and anything else you need (please clearly define any notation you use). (10 marks)
Solution:
\[
\Sigma = B \Omega_f B' + \Psi,
\]
where
\[
B = \begin{pmatrix} \beta_{11} & \beta_{21} \\ \vdots & \vdots \\ \beta_{1m} & \beta_{2m} \end{pmatrix}
\]
is the m × 2 matrix collecting the factor loadings β1i and β2i.
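A minimal NumPy sketch of this formula; the number of assets, the loadings in B, and the variances below are arbitrary values assumed for illustration, and the simulation at the end checks that the sample covariance of simulated returns approaches BΩfB′ + Ψ:

```python
import numpy as np

# Illustrative (assumed) values: m = 4 assets, 2 factors
m = 4
B = np.array([[0.9, 0.2],
              [1.1, -0.3],
              [0.7, 0.5],
              [1.0, 0.1]])                        # m x 2 matrix of loadings (beta_1i, beta_2i)
sigma_f, rho_f = 1.5, 0.4
Omega_f = np.array([[sigma_f**2, sigma_f * rho_f],
                    [sigma_f * rho_f, 1.0]])      # factor covariance, as in the problem
Psi = np.diag([0.3, 0.2, 0.4, 0.25])              # diagonal idiosyncratic covariance

Sigma = B @ Omega_f @ B.T + Psi                   # Sigma = B Omega_f B' + Psi

# Simulation check: the sample covariance of simulated x_t approaches Sigma
rng = np.random.default_rng(0)
T = 200_000
f = rng.multivariate_normal(np.zeros(2), Omega_f, size=T)   # factor draws f_t
eps = rng.multivariate_normal(np.zeros(m), Psi, size=T)     # idiosyncratic draws eps_t
x = f @ B.T + eps                                 # alpha omitted: it does not affect Cov(x_t)
print(np.allclose(np.cov(x, rowvar=False), Sigma, atol=0.05))   # should print True
```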
(b) How many unique parameters are contained in this formulation of Σ? (5 marks)
Solution: There are 3m + 2 parameters in this formulation: 2m factor loadings in B, m idiosyncratic variances σi² in Ψ, and 2 parameters (σf and ρf) in Ωf.
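As a small arithmetic check of this count, under the assumption m = 4 (an arbitrary illustrative choice):

```python
# Count of free parameters in Sigma = B Omega_f B' + Psi for an assumed m
m = 4
loadings = 2 * m        # beta_1i and beta_2i for each of the m assets (matrix B)
idiosyncratic = m       # sigma_i^2 on the diagonal of Psi
factor_cov = 2          # sigma_f and rho_f in Omega_f
print(loadings + idiosyncratic + factor_cov, 3 * m + 2)   # both print 14
```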

Problem 2: 25 Marks
Suppose we see excess returns xit on m assets (i = 1,2,··· ,m) over T time periods (t = 1,2,··· ,T). We may write these together as a vector at time t:
\[
x_t = \begin{pmatrix} x_{1t} \\ x_{2t} \\ \vdots \\ x_{mt} \end{pmatrix}.
\]
You may assume that each excess return has mean 0 and that the empirical covariance matrix is given by Ĉov(xt) = Σ̂x. Suppose further that we perform an eigendecomposition to recover
\[
\hat{\Sigma}_x = \Gamma \Lambda \Gamma'.
\]
Here Λ is a diagonal matrix of the eigenvalues of Σ̂x (with the eigenvalues on the diagonal ordered from largest to smallest), and Γ is a matrix with the eigenvectors of Σ̂x as columns.
(a) Define pt, the set of principal components of xt. (10 marks)
Solution: pt = Γ′xt.
(b) State two desirable properties of pt. (5 marks)
Solution: The elements of pt are (i) mutually uncorrelated and (ii) ordered by the fraction of the total variance of xt that they explain.
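A minimal NumPy sketch of the construction pt = Γ′xt on simulated returns (the data-generating process and dimensions are arbitrary assumptions), which also checks the two properties above:

```python
import numpy as np

rng = np.random.default_rng(1)
m, T = 3, 50_000
A = rng.standard_normal((m, m))
x = rng.standard_normal((T, m)) @ A.T              # rows are x_t' with cross-correlated columns

Sigma_hat = np.cov(x, rowvar=False)                # empirical covariance Sigma_hat_x
eigvals, Gamma = np.linalg.eigh(Sigma_hat)         # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]                  # reorder from largest to smallest
eigvals, Gamma = eigvals[order], Gamma[:, order]

p = x @ Gamma                                      # each row is p_t' = (Gamma' x_t)'

# (i) the components are (numerically) uncorrelated;
# (ii) their variances equal the ordered eigenvalues
print(np.allclose(np.cov(p, rowvar=False), np.diag(eigvals), atol=1e-8))
print(np.var(p, axis=0, ddof=1), eigvals)
```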
(c) Suppose m = 3 and the eigenvalues of Σ̂x are 5, 2 and 1. What fraction of the total variance of xt can be explained by the first two principal components? (10 marks)
Solution: (5 + 2)/(5 + 2 + 1) = 7/8.
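This fraction can be computed directly from the eigenvalues given in the question:

```python
# Eigenvalues of Sigma_hat_x from the question, ordered largest to smallest
eigvals = [5, 2, 1]
explained_first_two = sum(eigvals[:2]) / sum(eigvals)
print(explained_first_two)   # 0.875 = 7/8
```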

Problem 3: 25 Marks
Suppose we are interested in predicting some outcome variable yi with a vector of p+1 explanatory variables Xi, where Xi is given by:
\[
X_i = \begin{pmatrix} 1 \\ x_{1i} \\ x_{2i} \\ \vdots \\ x_{pi} \end{pmatrix}.
\]
The matrix containing these Xi for all i can be written as:
\[
X = \begin{pmatrix}
1 & x_{11} & x_{21} & \cdots & x_{p1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{1i} & x_{2i} & \cdots & x_{pi} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{1n} & x_{2n} & \cdots & x_{pn}
\end{pmatrix}.
\]
You may assume that yi and all xki have been standardized to have mean 0 and variance 1. Consider the following two minimization problems:
\[
\min_{\beta} \Big\{ \sum_{i=1}^{n} (y_i - X_i'\beta)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \Big\} \tag{1}
\]
\[
\min_{\beta} \Big\{ \sum_{i=1}^{n} (y_i - X_i'\beta)^2 + \lambda \sum_{j=1}^{p} |\beta_j| \Big\} \tag{2}
\]
Here Xi′ denotes the transpose of Xi and β = (β0, β1, . . . , βp)′. Note that the penalty summations in equations (1) and (2) begin at j = 1 and hence do not include the term β0.
(a) Suppose we set λ = 0 in each of the above. Please provide the βˆ that solves the minimization problems (1) and (2). (5 marks)
Solution: When λ = 0, both (1) and (2) reduce to ordinary least squares, so βˆ = (X′X)⁻¹X′y, where y is a column vector containing yi for all i.
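A minimal NumPy sketch of the λ = 0 (OLS) solution on simulated data; the sample size and true coefficients are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])   # column of ones for beta_0
beta_true = np.array([0.0, 1.0, -0.5, 2.0])
y = X @ beta_true + rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y without forming the inverse
print(beta_hat)                                # close to beta_true
```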
(b) Describe generally, in a sentence or two, why we might be interested in the solution to either (1) or (2) with λ > 0. Additionally, describe at least one advantage of (2) over (1). (10 marks)
Solution: Setting λ > 0 imposes a penalty on "large" values of the parameters βj that helps to prevent overfitting of the model. (1) is the minimization underlying ridge regression, while (2) describes the LASSO. (2) may be advantageous because it can set some βj exactly to 0, providing an element of model selection alongside regularization.
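A sketch contrasting the two penalties, assuming scikit-learn is available; the data-generating process (a few relevant regressors among many irrelevant ones) is an assumption made for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(3)
n, p = 200, 50
X = rng.standard_normal((n, p))                            # approximately standardized regressors
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                           # only 3 regressors actually matter
y = X @ beta_true + rng.standard_normal(n)

# Ridge minimizes ||y - Xb||^2 + alpha * sum(b_j^2), matching (1) directly.
ridge = Ridge(alpha=10.0, fit_intercept=False).fit(X, y)
# scikit-learn's Lasso scales the squared error by 1/(2n), so alpha corresponds to
# lambda in (2) only up to that scaling.
lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)

print((np.abs(ridge.coef_) < 1e-8).sum())   # ridge shrinks but essentially never hits exactly zero
print((np.abs(lasso.coef_) < 1e-8).sum())   # lasso sets many coefficients exactly to zero
```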
(c) Suppose a researcher sets λ = 0 and estimates parameters for p = 1500 different explanatory variables. The researcher finds that this explains 99.1% of the variation in the data used to estimate the parameters. As a result, the researcher claims that they will be able to almost perfectly predict yi out of sample. Discuss this claim (a few sentences or a short paragraph should be sufficient). (10 marks)
Solution: A model that estimates such a large number of coefficients (p = 1500) via OLS is very likely to have high prediction error due to overfitting: the model fits noise in the estimation sample. Good in-sample fit therefore does not guarantee good out-of-sample fit, and the researcher's claim is not justified.

Problem 4: 25 Marks
Suppose we are interested in estimating the coefficient β1 in the following linear model:
\[
Y_i = \beta_0 + \beta_1 X_i + v_i .
\]
Suppose that Corr(Xi, vi) ≠ 0, but that we have access to an instrumental variable Zi.
(a) List two necessary conditions for Zi to be a valid instrument. (5 marks)
Solution: Two standard necessary conditions on Zi are
Corr(Zi, vi) = 0 (exogeneity)
and
Corr(Zi, Xi) ≠ 0 (relevance).
(b) Suppose we see 5 observations of Xi, Zi, and Yi, shown in the table below:

Yi  Xi  Zi
 3  -3   0
 8   2   1
 4   6   1
 0  -2   1
 3   1   0

Calculate the instrumental variables estimate β̂_1^iv of β1, given the data above, using an estimator of your choice. (20 marks)
Solution: The simplest approach in this calculation is to use the Wald estimator. Because Zi is binary, this requires calculating four conditional means:
\[
E[Y_i \mid Z_i = 0] = 3, \quad E[Y_i \mid Z_i = 1] = 4, \quad E[X_i \mid Z_i = 0] = -1, \quad E[X_i \mid Z_i = 1] = 2.
\]
The IV estimate is simply the ratio of the differences between the conditional means of Yi and Xi:
\[
\hat{\beta}_1^{iv} = \frac{E[Y_i \mid Z_i = 1] - E[Y_i \mid Z_i = 0]}{E[X_i \mid Z_i = 1] - E[X_i \mid Z_i = 0]} = \frac{4 - 3}{2 - (-1)} = \frac{1}{3}.
\]
Of course, some students may take different approaches (e.g. explicitly calculating via two-stage least squares). Any method is acceptable provided the correct result is attained.
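A short NumPy check of this calculation using the data from the table; it also verifies that the sample-moment IV estimator cov(Z, Y)/cov(Z, X) gives the same answer as the Wald estimator when the instrument is binary:

```python
import numpy as np

# Data from the table in part (b)
Y = np.array([3, 8, 4, 0, 3], dtype=float)
X = np.array([-3, 2, 6, -2, 1], dtype=float)
Z = np.array([0, 1, 1, 1, 0], dtype=float)

# Wald estimator: ratio of differences in conditional means (valid here because Z is binary)
wald = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (X[Z == 1].mean() - X[Z == 0].mean())

# Sample-moment IV estimator: cov(Z, Y) / cov(Z, X)
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(wald, iv)   # both equal 1/3
```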

Problem 5: 10 Marks
Consider again the data given in Problem 4:
Yi  Xi  Zi
 3  -3   0
 8   2   1
 4   6   1
 0  -2   1
 3   1   0
Suppose we were interested in non-parametrically estimating h(X) = E[Y |X] using the nearest neighbors approach. Use the above to do so at the point X = 0 using the 3 nearest neighbors. (10 marks)
Solution: The three nearest neighbors to the point X = 0 are the observations with X = 1, 2, and -2, whose Y values are 3, 8, and 0. Therefore hˆ(0) = (3 + 8 + 0)/3 = 11/3.
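A short NumPy sketch of this calculation; knn_regress is a small helper written for this example (not a library function):

```python
import numpy as np

# Data from Problem 4
Y = np.array([3, 8, 4, 0, 3], dtype=float)
X = np.array([-3, 2, 6, -2, 1], dtype=float)

def knn_regress(x0, X, Y, k=3):
    """Average Y over the k observations whose X is closest to x0."""
    nearest = np.argsort(np.abs(X - x0))[:k]
    return Y[nearest].mean()

print(knn_regress(0.0, X, Y, k=3))   # (3 + 8 + 0) / 3 = 11/3 ≈ 3.667
```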
Author: CJH
©Imperial College London 2018/2019
