COMP 424 – Artificial Intelligence: Bayesian Networks

Instructor: Jackie CK Cheung. Readings: R&N Ch. 14

Describing the World Probabilistically


Naïve Bayes Model

• A common assumption in early diagnosis is that the symptoms are independent of each other given the disease.

• Let s1, ..., sn be the symptoms exhibited by a patient (e.g. fever, headache, etc.). Let D be the patient’s disease.

• Using the Naive Bayes assumption:

P(D, s1, …, sn) = P(D) P(s1 | D) … P(sn | D)

[Figure: Naïve Bayes network. A single Diagnosis node with a directed edge to each node Symptom 1, ..., Symptom n.]
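As a minimal runnable sketch of this factorization (the CPT numbers below are invented for illustration; only the structure P(D) P(s1 | D) ... P(sn | D) comes from the slide):

```python
# Naive Bayes joint probability: P(D, s1, ..., sn) = P(D) * prod_i P(si | D).
# All numbers here are hypothetical placeholders.

p_d = {True: 0.01, False: 0.99}   # prior P(D) over a binary disease

p_s_given_d = [                   # P(s_i = true | D) for each symptom
    {True: 0.9, False: 0.1},      # e.g., fever
    {True: 0.7, False: 0.2},      # e.g., headache
]

def nb_joint(d, symptoms):
    """P(D = d, s1, ..., sn) under the Naive Bayes assumption."""
    p = p_d[d]
    for cpt, s in zip(p_s_given_d, symptoms):
        p *= cpt[d] if s else 1 - cpt[d]
    return p

print(nb_joint(True, [True, True]))  # 0.01 * 0.9 * 0.7 = 0.0063
```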

Bayesian networks

• Bayesian networks represent conditional independence relationships in a systematic way using a graphical model.

• Specify conditional independencies using graph structure.

• Graphical model = graph structure + parameters.

Bayesian networks – Basics

• Nodes are random variables

• Edges specify dependency between random variables

• E.g., B: bronchitis, C: cough, F: fever (binary random variables)

• Edges specify that B directly influences probability of C, F.

• This results in conditional probability distributions: P(B), P(C | B), and P(F | B).

Semantics of network structure

Bayesian networks, formally speaking

Network structure and conditional independence

B = patient has bronchitis, F = patient has fever, C = patient has cough

In the graph above, C ⊥ F | B, but not C ⊥ F.

C is “conditionally independent” of F, given B.
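This follows directly from the factorization the graph encodes, P(B, C, F) = P(B) P(C | B) P(F | B); a short derivation:

```latex
\begin{align*}
P(C, F \mid B) &= \frac{P(B, C, F)}{P(B)}
                = \frac{P(B)\, P(C \mid B)\, P(F \mid B)}{P(B)}
                = P(C \mid B)\, P(F \mid B), \\
P(C, F)        &= \sum_{b} P(b)\, P(C \mid b)\, P(F \mid b)
                \neq P(C)\, P(F) \quad \text{in general.}
\end{align*}
```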

Example 2 (from Poole and Mackworth)

• The agent receives a report that everyone is leaving a building, and it must decide whether there is a fire in the building.

• The report sensor is noisy (e.g., human error or mischief).

• The fire alarm going off can cause everyone to leave, but not always (e.g., everyone may be in the middle of an exciting event).

• The fire alarm usually goes off when there is a fire, but the alarm could have been tampered with.

• A fire also causes smoke to come out from the building.

Question: Is there a fire? Should the agent alert the fire brigade?

Constructing Belief Nets: Variables

Variables and domains:

• Tampering = true when there is tampering with the alarm.

• Fire = true when there is a fire.

• Alarm = true when the alarm sounds.

• Smoke = true when there is smoke.

• Leaving = true if there is an exodus of people.

• Report = true if there is a report given by someone of people leaving.

Are there any independence relationships between these variables?

Constructing Belief Nets: Structure

How many parameters?

Consider the variables in the order of causality:

• Fire is independent of Tampering.

• Alarm depends on both Fire and Tampering.

• Smoke depends only on Fire. It is conditionally independent of Tampering and Alarm given whether there is a Fire.

• Leaving depends only on Alarm, and not directly on Fire, Tampering, or Smoke.

• Report depends directly only on Leaving.

The network topology expresses the conditional independencies above.

Counting free parameters per node: Tampering +1, Fire +1, Alarm +4, Smoke +2, Leaving +2, Report +2.

1 + 1 + 4 + 2 + 2 + 2 = 12 parameters are sufficient (24 including complements).

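A quick sanity check of this count in code: a binary node with k binary parents needs 2^k free parameters (one P(X = 1 | parents) entry per parent configuration).

```python
# Free-parameter count for the fire-alarm network: 2**(#parents) per binary node.
parents = {
    "Tampering": [],
    "Fire": [],
    "Alarm": ["Tampering", "Fire"],
    "Smoke": ["Fire"],
    "Leaving": ["Alarm"],
    "Report": ["Leaving"],
}

print(sum(2 ** len(ps) for ps in parents.values()))  # 1+1+4+2+2+2 = 12
```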

Constructing Belief Nets: CPDs

Each node stores a conditional probability distribution given its parents, e.g. P(A | T, F) for Alarm. [CPT figure; the entries used later in this lecture are P(T=1) = 0.02, P(F=1) = 0.01, P(A=1 | T=1, F=1) = 0.5, P(A=1 | T=0, F=1) = 0.99, P(S=1 | F=1) = 0.9, P(L=1 | A=1) = 0.88, P(R=1 | L=1) = 0.75.]

Causality and Bayes Net Structure

• Directionality of edges should ideally specify causality, but this doesn’t necessarily have to be the case.

• E.g., fire and tampering cause alarm

• Also, we may not know the direction of causality!

• Another graph structure (and corresponding CPTs) can produce the same joint probabilities!

• But not following causality usually results in more model parameters.
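As an illustration of this last point (a worked count, not from the slides): consider just Tampering, Fire, and Alarm. In the causal direction we need P(T), P(F), and P(A | T, F), i.e. 1 + 1 + 4 = 6 parameters. If we instead put Alarm first, we need P(A), P(T | A), and P(F | A, T), i.e. 1 + 2 + 4 = 7 parameters, even though both orderings can represent the same joint distribution.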

Inference in Bayes Nets

• What’s the point of all this? Answer questions about the state of the world!

• Find joint probability distribution

• Answer questions using conditional probabilities

• Determine causes

• Find explanations

• Use probability rules to figure out answers!

• Key operations:

• Rewrite joint probabilities as conditional probabilities

• Marginalize out variables

Inference in BNs: Joint prob.

Full joint distribution: what is P(Tampering, Fire, Alarm, Smoke, Leaving, Report) (all variables true)? Use the network structure to solve this!

= P(Tampering) × P(Fire) × P(Alarm | Tampering, Fire) × P(Smoke | Fire) × P(Leaving | Alarm) × P(Report | Leaving)

= 0.02 × 0.01 × 0.5 × 0.9 × 0.88 × 0.75 ≈ 5.94 × 10⁻⁵



Inference in BNs: Joint prob.

Full joint distribution: P(~T, F, A, S, L, ~R)?

= P(~T) × P(F) × P(A | ~T, F) × P(S | F) × P(L | A) × P(~R | L)

= 0.98 × 0.01 × 0.99 × 0.9 × 0.88 × 0.25 ≈ 1.92 × 10⁻³

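Both computations as a runnable sketch. Entries marked "assumed" do not appear on these slides and are placeholders; only the values listed in the CPT note above come from the lecture.

```python
# Full-joint entries from the CPTs of the fire-alarm network.
P_T = 0.02                                # P(T=1), from the slides
P_F = 0.01                                # P(F=1), from the slides
P_A = {(1, 1): 0.5, (0, 1): 0.99,         # P(A=1 | T, F), from the slides
       (1, 0): 0.85, (0, 0): 0.0001}      # assumed placeholders
P_S = {1: 0.9, 0: 0.01}                   # P(S=1 | F); F=0 entry assumed
P_L = {1: 0.88, 0: 0.001}                 # P(L=1 | A); A=0 entry assumed
P_R = {1: 0.75, 0: 0.01}                  # P(R=1 | L); L=0 entry assumed

def bern(p1, x):
    """P(X = x) for a binary variable with P(X = 1) = p1."""
    return p1 if x else 1 - p1

def joint(t, f, a, s, l, r):
    """P(T=t, F=f, A=a, S=s, L=l, R=r) via the chain of CPTs."""
    return (bern(P_T, t) * bern(P_F, f) * bern(P_A[(t, f)], a)
            * bern(P_S[f], s) * bern(P_L[a], l) * bern(P_R[l], r))

print(joint(1, 1, 1, 1, 1, 1))  # 0.02*0.01*0.5*0.9*0.88*0.75 ≈ 5.94e-05
print(joint(0, 1, 1, 1, 1, 0))  # 0.98*0.01*0.99*0.9*0.88*0.25 ≈ 1.92e-03
```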


Inference in BNs: Marginal prob.

Marginal probabilities: e.g., probability of getting a report, P(R)?

= P(R = 1) = ∑t,f,a,s,l P(T=t, F=f, A=a, S=s, L=l, R=1)

= ∑t,f,a,s,l P(T=t) P(F=f) P(A=a | T=t, F=f) P(S=s | F=f) P(L=l | A=a) P(R=1 | L=l)

Sum over the domains of the marginalized variables: t, f, a, s, l ∈ {0, 1}.

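By brute-force enumeration, reusing joint() from the sketch above (the printed value depends on the assumed CPT entries):

```python
from itertools import product

# P(R = 1): sum the joint over all assignments to T, F, A, S, L.
p_report = sum(joint(t, f, a, s, l, 1)
               for t, f, a, s, l in product([0, 1], repeat=5))
print(p_report)  # about 0.028 with the assumed entries
```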


Inference in BNs: Causal reasoning

Causal reasoning: e.g., probability of receiving a report in case of fire, P(R | F)?

P(R = 1 | F = 1) = P(R = 1, F = 1) / P(F = 1)

= ∑t,a,s,l P(T=t, F=1, A=a, S=s, L=l, R=1) / ∑t,a,s,l,r P(T=t, F=1, A=a, S=s, L=l, R=r)

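Continuing the enumeration sketch (again reusing joint(); the exact value depends on the assumed CPT entries):

```python
from itertools import product

# Causal reasoning: P(R=1 | F=1) = P(R=1, F=1) / P(F=1), both by enumeration.
num = sum(joint(t, 1, a, s, l, 1)
          for t, a, s, l in product([0, 1], repeat=4))     # P(R=1, F=1)
den = sum(joint(t, 1, a, s, l, r)
          for t, a, s, l, r in product([0, 1], repeat=5))  # P(F=1)
print(num / den)  # about 0.65 with the assumed entries
```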



Inference in BNs: Explanations

Evidential reasoning or explanation.

• Suppose the agent receives a report.

• Prob that there is a fire? P(F | R) = P(F, R) / P(R) = P(R | F) P(F) / P(R)

• Prob that there is tampering? P(T | R) = P(T, R) / P(R)

• Suppose the agent sees smoke instead.

• Prob that there is a fire? P(F | S) = P(F, S) / P(S)

• Prob that there is tampering? P(T | S) = P(T, S) / P(S)

Inference in BNs: Explanations

Evidential reasoning or explanation. Compare posteriors with the priors P(F) = 0.01 and P(T) = 0.02!

• Suppose the agent receives a report.

• Prob that there is a fire? P(F | R) = 0.2305

• Prob that there is tampering? P(T | R) = 0.399

• Suppose the agent sees smoke instead.

• Prob that there is a fire? P(F | S) = 0.476

• Prob that there is tampering? P(T | S) = 0.02 (equal to the prior: smoke alone says nothing about tampering)



Inference in BNs: Explanations

Evidential reasoning or explanation.

• Suppose the agent receives a report and sees smoke.

• Prob that there is a fire? P(F | R, S) = P(F, R, S) / P(R, S) = 0.964

• Prob that there is tampering? P(T | R, S) = P(T, R, S) / P(R, S) = 0.0286

Compare to P(F | R) = 0.2305 and P(T | R) = 0.399: observing smoke makes fire far more likely, and fire explains the report, so tampering becomes less likely. This is called explaining away. Conversely, P(F | R, ~S) = 0.0294 and P(T | R, ~S) = 0.501.
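A generic conditional query by enumeration reproduces this pattern (continuing the joint() sketch; since some CPT entries were assumed, the printed values only approximately match the slide numbers):

```python
from itertools import product

def cond(evidence, query):
    """P(query | evidence); both map a position in (t, f, a, s, l, r) to a value."""
    def total(clamp):
        domains = [[clamp[i]] if i in clamp else [0, 1] for i in range(6)]
        return sum(joint(*assign) for assign in product(*domains))
    return total({**evidence, **query}) / total(evidence)

T, F, S, R = 0, 1, 3, 5              # positions of Tampering, Fire, Smoke, Report
print(cond({R: 1}, {F: 1}))          # P(F | R)    -> about 0.23
print(cond({R: 1}, {T: 1}))          # P(T | R)    -> about 0.40
print(cond({R: 1, S: 1}, {F: 1}))    # P(F | R, S) -> about 0.96: smoke explains the report
print(cond({R: 1, S: 1}, {T: 1}))    # P(T | R, S) -> about 0.03: tampering "explained away"
```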

Types of queries for graphical models



Inference in BNs: MAP queries

Calculating the MAP from the posteriors.

• Suppose the agent receives a report.

• Prob that there is a fire? P(F | R) = 0.2305, so the MAP assignment is F = 0.

• Prob that there is tampering? P(T | R) = 0.399, so the MAP assignment is T = 0.

• Suppose the agent sees smoke AND receives a report.

• Prob that there is a fire? P(F | R, S) = 0.964, so the MAP assignment is F = 1.
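The MAP assignment for a single binary variable just picks the value with the larger posterior; continuing the cond() sketch above:

```python
# MAP for one binary query variable: argmax over its two values.
def map_value(evidence, var):
    p1 = cond(evidence, {var: 1})
    return 1 if p1 >= 0.5 else 0

print(map_value({R: 1}, F))        # 0: with only a report, "no fire" is most probable
print(map_value({R: 1, S: 1}, F))  # 1: with smoke as well, "fire" becomes the MAP
```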

Other examples of MAP queries

• In speech recognition:

• given a speech signal

• determine sequence of words most likely to have generated signal.

• In medical diagnosis:

• given a patient

• determine the most probable diagnosis.

• In robotics:

• given sensor readings

• determine the most probable location of the robot.

Complexity of inference in Bayes Nets

• Given a Bayes net and a random variable X, deciding whether P(X=x) > 0 is NP-hard.

• Bad news: there is no general inference procedure that will work efficiently for all network configurations.

• Good news: for particular families of networks, inference can be done efficiently, e.g. tree-structured graphs, including the Naïve Bayes model!

