# End-of-year Examinations, 2020

End-of-year Examinations, 2020
STAT314 / STAT461 -20S2 (C)
Family Name First Name Student Number Venue
Seat Number
_____________________ _____________________ |__|__|__|__|__|__|__|__| ____________________ ________
No electronic/communication devices are permitted. No exam materials may be removed from the exam room.
Mathematics and Statistics EXAMINATION
End-of-year Examinations, 2020
STAT314-20S2 (C) Bayesian Inference
STAT461-20S2 (C) Bayesian Inference
Examination Duration: 180 minutes
Exam Conditions:
Open Book exam: Students may bring in any written or printed materials. Calculators with a ‘UC’ sticker approved.
Materials Permitted in the Exam Venue:
Open Book exam: Students may bring in any written or printed materials.
Materials to be Supplied to Students:
1 x Standard 16-page UC answer book.
Instructions to Students:
Remember to write your name and student number on ALL answer booklets. Start each question on a new page.
This is an open book examination.
Show all working and calculations.


Question 1 (5 marks)
Let v and w be continuous random variables and suppose that the following expression is the unnormalized beta density for one of them (conditionally given the other):
v^w (1 − v)^{2w}.
Part (a) (1 mark)
Which random variable, v or w, is the above expression the unnormalized beta density for?
Part (b) (2 marks)
What are the parameters of the beta density in the above expression?
Part (c) (2 marks)
What are the supports of v and w?
Question 2. (5 marks)
Evaluate the following integral,
∫_0^1 x^3 (1 − x)^{−1/2} dx,
using only insights about statistical distributions, without doing integration by parts or using a calculator. Show all of your working clearly and give your answer as a fraction in its simplest (lowest) form.
You may use the following properties of the gamma function:
• Recursive property: For k > 1, Γ(k) = (k − 1) Γ(k − 1).
• Γ(1) = 1.
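The gamma-function hints above pair with the beta-function identity ∫_0^1 x^{a−1}(1 − x)^{b−1} dx = Γ(a)Γ(b)/Γ(a + b). A quick numerical cross-check of the intended approach (not a substitute for the written working the question asks for), using only the standard library:

```python
import math

# Recognize the integrand as an unnormalized beta(4, 1/2) density, so the
# integral equals B(4, 1/2) = Gamma(4) * Gamma(1/2) / Gamma(4 + 1/2).
value = math.gamma(4) * math.gamma(0.5) / math.gamma(4.5)
# value == 32/35 ≈ 0.9142857...
```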

Question 3. (5 marks)
Let x_1, …, x_n | μ, ν be iid N(μ, ν), where μ is the mean and ν is the precision. Suppose that we use a prior distribution for (μ, ν) such that the resulting posterior distribution can be factorized as
p(μ, ν | X_n) = p(μ | ν, X_n) p(ν | X_n),
where X_n = (x_1, …, x_n), p(μ | ν, X_n) = N(μ | m*, s*ν), and p(ν | X_n) = gamma(ν | a*, b*), and where m*, s*, a* and b* are posterior parameters. Recall that the posterior predictive distribution, p(x̃ | X_n), will be a generalized t distribution.
Now suppose that we wish to generate sample values from the posterior predictive distribution, but we do not know how to generate from the generalized t distribution. Describe clearly how we can obtain the required sample values by using an appropriate factorization of the joint posterior, p(x̃, μ, ν | X_n).
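One way to sketch the composition approach (an illustration, not the required written answer): draw ν from its gamma marginal, then μ given ν, then x̃ given (μ, ν), and keep only the x̃ values. The sketch below assumes s*ν is the precision of the conditional normal and that b* is a rate parameter; the numerical values of m*, s*, a*, b* are placeholders.

```python
import random

# Hypothetical posterior parameters (placeholders, not from the exam).
m_star, s_star = 0.0, 10.0   # conditional-normal parameters for mu
a_star, b_star = 3.0, 2.0    # gamma shape and rate for nu

def sample_predictive(N, rng=random):
    """Composition sampling via p(x~, mu, nu | X_n) = p(nu|X_n) p(mu|nu,X_n) p(x~|mu,nu).
    Discarding (mu, nu) leaves draws from the generalized t predictive."""
    draws = []
    for _ in range(N):
        # 1) nu ~ gamma(a*, b*)   (shape a*, rate b*, i.e. scale 1/b*)
        nu = rng.gammavariate(a_star, 1.0 / b_star)
        # 2) mu | nu ~ N(m*, precision s*·nu)  ->  sd = (s*·nu)^(-1/2)
        mu = rng.gauss(m_star, (s_star * nu) ** -0.5)
        # 3) x~ | mu, nu ~ N(mu, precision nu)
        draws.append(rng.gauss(mu, nu ** -0.5))
    return draws
```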

Question 4. (15 marks)
Let x_1, …, x_n | p be iid Bernoulli(p), X_n = (x_1, …, x_n), and s = ∑_{i=1}^{n} x_i. We have seen in class that, using a flat prior density for p, the posterior density for p is
f(p | X_n) ∝ p^s (1 − p)^{n−s},
which is beta(s + 1, n − s + 1).
Let q = p^{1/2} in parts (b), (c) and (d) of this question.
Part (a) (3 marks)
Use a Bayesian argument to find the maximum likelihood estimate of p.
Part (b) (2 marks)
Now suppose that we re-parameterize the Bernoulli model using q. Write down the likelihood function for the Bernoulli model with q as the parameter, i.e. what is f(X_n | q) equal to?
Part (c) (5 marks)
Given that, with a flat prior density for p, the corresponding prior density for q is f(q) = 2q for q ∈ [0, 1], and 0 otherwise:
Use Bayes' rule to derive the posterior density, f(q | X_n).
Is the posterior density, f(q | X_n), a beta density? Explain clearly why or why not.
Part (d) (5 marks)
Use the density transformation theorem to obtain the posterior density, f(q | X_n), directly from f(p | X_n).
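As a sketch of the substitution involved (assuming p = q², so dp/dq = 2q, consistent with the prior density given in part (c)):

```latex
f(q \mid X_n) \;=\; f\big(p(q) \mid X_n\big)\left|\frac{dp}{dq}\right|
\;\propto\; (q^2)^{s}\,(1-q^2)^{\,n-s}\cdot 2q
\;=\; 2\,q^{\,2s+1}\,(1-q^2)^{\,n-s}, \qquad q\in[0,1].
```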

Question 5. (15 marks)
Let x_1, …, x_n be independent and identically distributed random variables that have a uniform distribution on the interval [0, θ], i.e. the lower boundary of the interval is zero and the upper boundary, θ, is unknown. The probability density function is given by
p(x_i | θ) = θ^{−1} I(θ ≥ x_i),
where I(condition) is the indicator function, which returns a value of 1 when the condition is satisfied, and a value of 0 otherwise.
Part (a) (2 marks)
Show that the joint likelihood function for x_1, …, x_n is
p(x_1, …, x_n | θ) = θ^{−n} I(θ ≥ x_max),
where x_max = max{x_1, …, x_n}.
Hint: I(condition1) × I(condition2) = I(condition1 AND condition2).
Part (b) (3 marks)
Consider the following proper prior density for θ:
p(θ) = α β^α θ^{−(α+1)} I(θ ≥ β),
where α > 0 and β > 0 are the hyperparameters.
Derive the resulting posterior density, p(θ | x_1, …, x_n).
Part (c) (3 marks)
Is the prior density in part (b) a conjugate prior density for θ? Justify your answer.
If your answer is yes, state how the posterior parameters are updated from the prior parameters, i.e. α → ? and β → ?
Part (d) (2 marks)
Suppose that we know how to generate random values from the posterior distribution, p(θ | x_1, …, x_n). Use pseudo-code to describe how to generate N random values, x̃_1, …, x̃_N, from the posterior predictive distribution, p(x̃ | x_1, …, x_n).
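As an illustrative sketch of such pseudo-code (assuming, following part (b), that the posterior for θ is a Pareto density with shape α + n and scale max(β, x_max); the parameter values used below are hypothetical placeholders):

```python
import random

def sample_pareto(shape, scale, rng=random):
    """Inverse-CDF draw from a Pareto(shape, scale) distribution.
    1 - U is in (0, 1], so the division is always safe."""
    return scale / (1.0 - rng.random()) ** (1.0 / shape)

def posterior_predictive(N, shape, scale, rng=random):
    """For each of the N draws: theta ~ p(theta | x_1..x_n),
    then x~ | theta ~ Uniform(0, theta)."""
    draws = []
    for _ in range(N):
        theta = sample_pareto(shape, scale, rng)
        draws.append(rng.uniform(0.0, theta))
    return draws
```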

Part (e) (2 marks)
Now consider another prior density for θ:
p(θ) ∝ θ^{−1}.
Show whether this prior density is proper or improper.
Hint: ∫ θ^{−1} dθ = log(θ), where log denotes the natural logarithm.
Part (f) (3 marks)
Derive the posterior density, p(θ | x_1, …, x_n), corresponding to the prior density in part (e). Explain, without doing any integration, why the resulting posterior density is a proper density.

Question 6. (15 marks)
Suppose we have K independent experiments, where a single scalar observation is made in each experiment. Let the observation in experiment k be x_k, with the following distribution:
p(x_k | θ_k, σ_k²) = N(x_k | θ_k, σ_k²),
for k = 1, …, K. Hence, x_k has a normal distribution with mean θ_k and variance σ_k². Suppose that θ_1, …, θ_K are unknown, and that σ_1², …, σ_K² are assumed to be known.
Consider the following hierarchical model:
p(x_1, …, x_K | θ_1, …, θ_K, σ_1², …, σ_K²) = ∏_{k=1}^{K} N(x_k | θ_k, σ_k²),
p(θ_1, …, θ_K | μ, τ) = ∏_{k=1}^{K} N(θ_k | μ, τ²),
p(μ, τ) = constant.
The resulting posterior distributions are as follows:
p(θ_k | μ, τ, x_k) = N(θ_k | θ̂_k, ν_k²), for k = 1, …, K,
p(μ | τ, x_1, …, x_K) = N(μ | μ̂, ω²),
p(τ | x_1, …, x_K) ∝ ω ∏_{k=1}^{K} (σ_k² + τ²)^{−1/2} exp(−(x_k − μ̂)² / (2(σ_k² + τ²))),
where
θ̂_k = (1/σ_k² + 1/τ²)^{−1} (x_k/σ_k² + μ/τ²),
1/ν_k² = 1/σ_k² + 1/τ²,
μ̂ = (∑_{k=1}^{K} 1/(σ_k² + τ²))^{−1} ∑_{k=1}^{K} x_k/(σ_k² + τ²),
1/ω² = ∑_{k=1}^{K} 1/(σ_k² + τ²).

Part (a) (2 marks)
Draw a directed acyclic graph for the hierarchical model.
Part (b) (5 marks)
Describe how sample values can be generated from the marginal posterior distribution, p(τ | x_1, …, x_K), using the method of grid points.
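As an illustration of what such a description might look like in code (the data values, known variances, and grid settings below are hypothetical placeholders, not from the exam; unnorm_post_tau implements the unnormalized density given above):

```python
import math
import random

x = [28.0, 8.0, -3.0, 7.0]                      # hypothetical observations, one per experiment
sigma2 = [15.0**2, 10.0**2, 16.0**2, 11.0**2]   # known variances (placeholders)

def unnorm_post_tau(tau):
    """Unnormalized p(tau | x_1,...,x_K) from the formula in the question."""
    v = [s + tau**2 for s in sigma2]            # sigma_k^2 + tau^2
    w = [1.0 / vk for vk in v]
    mu_hat = sum(wk * xk for wk, xk in zip(w, x)) / sum(w)
    omega = sum(w) ** -0.5                      # 1/omega^2 = sum_k 1/(sigma_k^2 + tau^2)
    dens = omega
    for xk, vk in zip(x, v):
        dens *= vk ** -0.5 * math.exp(-(xk - mu_hat) ** 2 / (2.0 * vk))
    return dens

def sample_tau(N, grid_max=40.0, grid_size=400, rng=random):
    """Grid method: evaluate the unnormalized density on a grid of points,
    then draw grid points with probability proportional to those values."""
    grid = [grid_max * (i + 0.5) / grid_size for i in range(grid_size)]
    weights = [unnorm_post_tau(t) for t in grid]
    return rng.choices(grid, weights=weights, k=N)
```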
Part (c) (8 marks)
Now suppose that we have coded a function called tauPosterior to generate sample values from p(τ | x_1, …, x_K). Use pseudo-code to describe how to generate N random values, x̃_{k,1}, …, x̃_{k,N}, from the posterior predictive distribution, p(x̃_k | x_1, …, x_K), for experiment k.
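A hedged sketch of the composition scheme such pseudo-code would describe: draw τ, then μ | τ, then θ_k | μ, τ, x_k, then x̃_k | θ_k, keeping only the x̃_k values. Here tauPosterior is a stand-in with an assumed one-draw interface, and the data and variances are placeholders; the real sampler would come from part (b).

```python
import random

def tauPosterior(rng=random):
    """Stand-in for the question's function; assumed to return one draw of tau."""
    return abs(rng.gauss(5.0, 2.0))

def predictive_for_experiment(k, N, x, sigma2, rng=random):
    """Composition sampling for p(x~_k | x_1,...,x_K), using the posterior
    formulas given in the question."""
    draws = []
    for _ in range(N):
        tau = tauPosterior(rng)
        # mu | tau, x ~ N(mu_hat, omega^2)
        w = [1.0 / (s + tau**2) for s in sigma2]
        mu_hat = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        mu = rng.gauss(mu_hat, sum(w) ** -0.5)
        # theta_k | mu, tau, x_k ~ N(theta_hat_k, nu_k^2)
        prec = 1.0 / sigma2[k] + 1.0 / tau**2
        theta_hat = (x[k] / sigma2[k] + mu / tau**2) / prec
        theta_k = rng.gauss(theta_hat, prec ** -0.5)
        # x~_k | theta_k ~ N(theta_k, sigma_k^2)
        draws.append(rng.gauss(theta_k, sigma2[k] ** 0.5))
    return draws
```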
END OF PAPER