6CCS3AIN, Tutorial 03 Answers (Version 1.0) 1. The Bayesian network is:
[Figure: the network has Smoking as the single cause, with arrows to its two effects, Cancer and BadBreath.]

We are told that:

P(smoking) = 0.2
P(cancer|smoking) = 0.6
P(badBreath|smoking) = 0.95

Now, the naive Bayes model tells us that:

P(Cause, Effect_1, ..., Effect_n) = P(Cause) · ∏_i P(Effect_i|Cause)

so:

P(smoking, cancer, badBreath) = P(smoking) · P(cancer|smoking) · P(badBreath|smoking)
                              = 0.2 · 0.6 · 0.95
                              = 0.114

2. The Bayesian network is:

[Figure: the network has LateStart with an arrow to FailProject.]

3. With no network, 5 variables need:

2^5 − 1 = 31

numbers. With the network, we need:

1 + 2 + 2 + 4 + 2 = 11

4. To get the joint probability over the set of variables in the network we apply the chain rule as on the slides. We have:

P(¬m, ¬t, ¬h, ¬s, ¬c) = P(¬h|¬t) · P(¬c|¬s, ¬t) · P(¬t|¬m) · P(¬s|¬m) · P(¬m)
                      = 0.9 · 0.8 · 0.9 · 0.99 · 0.3
                      = 0.192456

Note that values such as the 0.8 in the product above are obtained by complementation, e.g. P(¬c|¬s, ¬t) = 1 − P(c|¬s, ¬t); similarly P(¬t|¬m) = 1 − P(t|¬m).
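The arithmetic in questions 1 and 3 can be checked with a short Python sketch (the variable names are ours; the parent counts follow the network structure described above):

```python
# Question 1: naive Bayes joint probability P(smoking, cancer, badBreath).
p_smoking = 0.2
p_cancer_given_smoking = 0.6
p_badbreath_given_smoking = 0.95

joint = p_smoking * p_cancer_given_smoking * p_badbreath_given_smoking
print(round(joint, 3))  # 0.114

# Question 3: numbers needed with and without the network over 5 Booleans.
no_network = 2**5 - 1  # full joint distribution: 31 independent numbers

# With the network, each variable needs one number per assignment to its
# parents: parent counts 0, 1, 1, 2, 1 give 1 + 2 + 2 + 4 + 2 = 11.
parent_counts = [0, 1, 1, 2, 1]
with_network = sum(2**k for k in parent_counts)
print(no_network, with_network)  # 31 11
```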
5. From the slides we have:

P(M|h, s) = P(M, h, s) / P(h, s)
          = α P(M, h, s)
          = α Σ_t Σ_c P(M, h, s, t, c)
Factorising as in Q4, we then have:

P(M|h, s) = α Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|M) ⊙ P(s|M) ⊙ P(M)
          = α P(M) ⊙ P(s|M) ⊙ Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|M)
          = α ⟨ P(m) · P(s|m) · Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|m),
                P(¬m) · P(s|¬m) · Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|¬m) ⟩

Let's say that:

p_m = P(m) · P(s|m) · Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|m)
    = P(m) · P(s|m) · (P(h|t) · P(t|m) · P(c|s, t) + P(h|t) · P(t|m) · P(¬c|s, t)
      + P(h|¬t) · P(¬t|m) · P(c|s, ¬t) + P(h|¬t) · P(¬t|m) · P(¬c|s, ¬t))
    = 0.1 · 0.8 · (0.9 · 0.7 · 0.95 + 0.9 · 0.7 · 0.05 + 0.7 · 0.3 · 0.85 + 0.7 · 0.3 · 0.15)
    = 0.0672

Similarly, let:

p_¬m = P(¬m) · P(s|¬m) · Σ_t Σ_c P(h|t) · P(c|s, t) · P(t|¬m)
     = 0.9 · 0.2 · (0.9 · 0.1 · 0.95 + 0.9 · 0.1 · 0.05 + 0.7 · 0.9 · 0.85 + 0.7 · 0.9 · 0.15)
     = 0.1296
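As a sanity check, the two unnormalised values can be recomputed numerically. This is a sketch using only the CPT entries quoted in the working above:

```python
# CPT entries from question 5 (s is true throughout, so only P(c|s,T) is needed).
p_m = 0.1                        # P(m)
p_s = {True: 0.8, False: 0.2}    # P(s | M)
p_t = {True: 0.7, False: 0.1}    # P(t | M)
p_h = {True: 0.9, False: 0.7}    # P(h | T)
p_c = {True: 0.95, False: 0.85}  # P(c | s, T)

def inner(m):
    # Sum over t and c of P(h|t) * P(c|s,t) * P(t|m).
    total = 0.0
    for t in (True, False):
        pt = p_t[m] if t else 1 - p_t[m]
        for c in (True, False):
            pc = p_c[t] if c else 1 - p_c[t]
            total += p_h[t] * pc * pt
    return total

pm = p_m * p_s[True] * inner(True)           # unnormalised value for m
pnm = (1 - p_m) * p_s[False] * inner(False)  # unnormalised value for ¬m
print(round(pm, 4), round(pnm, 4))           # 0.0672 0.1296
print(round(pm / (pm + pnm), 2))             # 0.34, the normalised P(m|h,s)
```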
Now:

P(M|h, s) = α ⟨p_m, p_¬m⟩ = α ⟨0.0672, 0.1296⟩ = ⟨0.34, 0.66⟩

6. To compute the first sample:
• Sample m.
P(m) is 0.1, so we want to generate m one time in 10 and ¬m nine times in 10. If we generate a random number with equal probability of picking any number between 0 and 1, then that number will be less than or equal to 0.1 one time in 10, and greater than 0.1 nine times in 10.
So, our method for picking whether m is true or false is to compare a random number (picked between 0 and 1, inclusive, with equal probability for every number in that range) with P(m). If the random number is less than or equal to P(m), then m is true. Otherwise m is false.
Our first random number is 0.14, so m is false.
• Sample s.
Given that we have ¬m, we now sample s given ¬m. P(s|¬m) is 0.2. Our random number is 0.57, so s is false.
• Sample t.
Similarly to the previous case, P(t|¬m) = 0.1, and our random number is 0.01, which is less than 0.1, so t is true.
• Sample c.
We sample c given ¬s and t. P (c|¬s, t) = 0.85 and the next random number is 0.43, so c is true.
• Sample h.
Since t is true, we need to sample given t. P (h|t) = 0.9, our random number is 0.59, so h is true.
So we have a sample ⟨¬m, ¬s, t, c, h⟩.
Following the same procedure again we get, in turn, the samples:
⟨¬m, s, ¬t, ¬c, h⟩
⟨¬m, ¬s, ¬t, ¬c, ¬h⟩
⟨¬m, ¬s, ¬t, ¬c, h⟩
⟨¬m, ¬s, ¬t, ¬c, ¬h⟩
Therefore we estimate P̂(¬m, ¬t, ¬h, ¬s, ¬c) = 2/5, where we use P̂ to denote that it is an estimate.
The probabilities only become accurate after many iterations (as the number of iterations approaches infinity, the estimate approaches the correct value).
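The sampling procedure above can be sketched in Python. The comparison rule is the one described in question 6: a variable is set true when its random number is at most its conditional probability. Most CPT entries are quoted in the answers; the entry P(c|¬s, ¬t) = 0.2 is inferred from the 0.8 used in question 4 and should be treated as an assumption.

```python
# Prior sampling in topological order (M, S, T, C, H), driven by a fixed
# stream of random numbers, as in the worked answer.

def sample_once(stream):
    r = iter(stream)
    m = next(r) <= 0.1                  # P(m) = 0.1
    s = next(r) <= (0.8 if m else 0.2)  # P(s|m) = 0.8, P(s|¬m) = 0.2
    t = next(r) <= (0.7 if m else 0.1)  # P(t|m) = 0.7, P(t|¬m) = 0.1
    p_c = {(True, True): 0.95, (True, False): 0.85,
           (False, True): 0.85,
           (False, False): 0.2}[(s, t)]  # P(c|¬s,¬t) = 0.2 is an assumption
    c = next(r) <= p_c
    h = next(r) <= (0.9 if t else 0.7)  # P(h|t) = 0.9, P(h|¬t) = 0.7
    return m, s, t, c, h

# The first five random numbers from the question give the first sample:
print(sample_once([0.14, 0.57, 0.01, 0.43, 0.59]))
# (False, False, True, True, True), i.e. ⟨¬m, ¬s, t, c, h⟩
```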
7. Since we are computing P(m|h, s), we only use samples in which h and s are both true. Using rejection sampling we proceed as before, keeping only the samples where h and s both hold. There is only one such sample, and in it m does not hold, so we estimate P̂(m|h, s) = 0.
Again, this is approximate, and will improve with more samples. Indeed, if you complete the 5 samples that the question asks for, you may get a better approximation.
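With a real random-number generator in place of the fixed stream, rejection sampling is a short loop. This is a sketch under the same CPT values as in question 6 (including the assumed P(c|¬s, ¬t) = 0.2); with enough samples the estimate approaches the 0.34 computed in question 5.

```python
import random

def sample_once(r):
    # One prior sample through the network, as in question 6.
    m = r.random() <= 0.1
    s = r.random() <= (0.8 if m else 0.2)
    t = r.random() <= (0.7 if m else 0.1)
    p_c = {(True, True): 0.95, (True, False): 0.85,
           (False, True): 0.85, (False, False): 0.2}[(s, t)]
    c = r.random() <= p_c
    h = r.random() <= (0.9 if t else 0.7)
    return m, s, t, c, h

def rejection_estimate(n=20000, seed=0):
    r = random.Random(seed)
    kept = m_true = 0
    for _ in range(n):
        m, s, t, c, h = sample_once(r)
        if h and s:          # reject samples that contradict the evidence
            kept += 1
            m_true += m
    return m_true / kept     # estimate of P(m | h, s)

print(round(rejection_estimate(), 2))  # close to 0.34
```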
8. For likelihood weighting, we start by picking an order in which we will evaluate the variables. We will use the same order as before.
Then, starting at the beginning of the list of random numbers:
• Sample m
m is false. (We use the 0.14 to get this; we are starting over with the random numbers again.)
• s is true by definition.
w is set to the value of P(s|¬m), so w is 0.2.
• Sample t
t is true. (We use 0.01 to get this.)
• h is true by definition.
Thus we update w with P(h|t) = 0.9. w = 0.2 · 0.9 = 0.18.
• Sample c. c is true.
So we have the sample ⟨¬m, s, t, c, h⟩ and the weight is 0.18.
For the second sample: ⟨¬m, s, ¬t, ¬c, h⟩, with weight 0.2 · 0.7 = 0.14.
For the third sample: ⟨¬m, s, ¬t, c, h⟩, with weight 0.2 · 0.7 = 0.14.
For the fourth sample: ⟨¬m, s, ¬t, ¬c, h⟩, with weight 0.2 · 0.7 = 0.14.
For the fifth sample: ⟨¬m, s, ¬t, ¬c, h⟩, with weight 0.2 · 0.7 = 0.14.
So:
P̂(T|h, s) = α ⟨0.18, 4 · 0.14⟩ ≈ ⟨0.243, 0.757⟩
Again I stress that these values are very approximate with so few samples; when we ran this for several thousand samples, the values were very different.
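The full procedure, with a random-number generator in place of the fixed stream, looks like the sketch below. It uses the same CPT values as above; the evidence variables s and h are clamped and contribute to the weight rather than being sampled, and with many samples the estimate settles well away from the five-sample ⟨0.243, 0.757⟩ above.

```python
import random

P_S = {True: 0.8, False: 0.2}   # P(s | M)
P_T = {True: 0.7, False: 0.1}   # P(t | M)
P_H = {True: 0.9, False: 0.7}   # P(h | T)

def weighted_sample(r):
    m = r.random() <= 0.1        # sample M from P(m) = 0.1
    w = P_S[m]                   # s is evidence: multiply in P(s | m)
    t = r.random() <= P_T[m]     # sample T from P(t | m)
    w *= P_H[t]                  # h is evidence: multiply in P(h | t)
    # c would also be sampled in the full algorithm, but it is neither
    # evidence nor part of this query and does not affect the weight.
    return t, w

def lw_estimate(n=20000, seed=0):
    r = random.Random(seed)
    weight = {True: 0.0, False: 0.0}
    for _ in range(n):
        t, w = weighted_sample(r)
        weight[t] += w
    return weight[True] / (weight[True] + weight[False])

print(round(lw_estimate(), 2))   # estimate of P(t | h, s)
```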
9. Again, I’m not going to post a solution for this optional part of the tutorial, but if you did it, you can check the correctness of your solution against the value for P(m|h, s) in question 5.