# Gibbs sampling

A rather different approach to sampling, part of the Markov chain Monte Carlo (MCMC) family of algorithms.

Rather than generating each sample from scratch, we generate the next sample by making a random change to the previous one.

© Trenn, King's College London


Gibbs sampling for Bayesian networks starts with an arbitrary state.

So we pick a state with the evidence variables fixed at their observed values.

(If we know Cloudy = true, we fix Cloudy at true.)

Generate the next state by randomly sampling a value for one of the non-evidence variables.

This sampling is done conditional on the current values of the variables in its Markov blanket.

“The algorithm therefore wanders randomly around the state space . . . flipping one variable at a time, but keeping the evidence variables fixed”.
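That wandering loop can be sketched generically in Python. Here `sample_given_blanket` is a hypothetical callback standing in for "draw this variable's new value conditional on the current values of its Markov blanket" (how to do that is covered below):

```python
import random

def gibbs_walk(state, evidence_vars, sample_given_blanket, steps):
    """Wander the state space: resample one non-evidence variable
    at a time, keeping the evidence variables fixed.  Mutates `state`;
    `sample_given_blanket(var, state)` is assumed to draw var's new
    value conditional on its Markov blanket."""
    nonevidence = [v for v in state if v not in evidence_vars]
    samples = []
    for _ in range(steps):
        var = random.choice(nonevidence)        # flip one variable at a time
        state[var] = sample_given_blanket(var, state)
        samples.append(dict(state))             # record the visited state
    return samples
```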


Consider the query:

P(Cloudy | Sprinkler = true, WetGrass = true)

The evidence variables are fixed to their observed values.

The non-evidence variables are initialised randomly.

Suppose this gives Cloudy = true and Rain = false. The state is thus:

[Cloudy = true, Sprinkler = true, Rain = false, WetGrass = true]
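As a sketch, this initialisation step looks like the following (fair coin flips for the non-evidence variables are an assumption; any initial assignment works):

```python
import random

# Evidence variables fixed at their observed values.
evidence = {"Sprinkler": True, "WetGrass": True}

# Non-evidence variables initialised randomly, e.g. to
# Cloudy = true, Rain = false as in the example above.
state = dict(evidence)
for var in ["Cloudy", "Rain"]:
    state[var] = random.choice([True, False])
```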


First we sample Cloudy given the current state of its Markov blanket.

Markov blanket is Sprinkler and Rain. So, sample from:

P(Cloudy | Sprinkler = true, Rain = false)

Suppose we get Cloudy = false; the new state is then:


[Cloudy = false, Sprinkler = true, Rain = false, WetGrass = true]


Next we sample Rain given the current state of its Markov blanket.

Markov blanket is Cloudy, Sprinkler and WetGrass. So, sample from:


P(Rain | Cloudy = false, Sprinkler = true, WetGrass = true)

Suppose we get Rain = true; the new state is then:

[Cloudy = false, Sprinkler = true, Rain = true, WetGrass = true]
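For concreteness, the distribution sampled at this step can be worked out from the CPTs. The numbers below are the values usually quoted for this sprinkler network and are an assumption here; note that Rain's parent term is multiplied by the term for its child WetGrass:

```python
# P(Rain | Cloudy = false, Sprinkler = true, WetGrass = true),
# proportional to P(Rain | -c) * P(w | s, Rain).
# Assumed CPT values: P(r|-c) = 0.2, P(w|s,r) = 0.99, P(w|s,-r) = 0.90.

num_rain_true  = 0.2 * 0.99   # P(r | -c)  * P(w | s, r)
num_rain_false = 0.8 * 0.90   # P(-r | -c) * P(w | s, -r)

alpha = 1 / (num_rain_true + num_rain_false)
p_rain = (alpha * num_rain_true, alpha * num_rain_false)   # ≈ (0.216, 0.784)
```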


Each state visited during this process contributes to our estimate for:

P(Cloudy | Sprinkler = true, WetGrass = true)

Say the process visits 80 states: in 20 of them Cloudy = true, and in 60 Cloudy = false. Then

P(Cloudy | Sprinkler = true, WetGrass = true) = α⟨20, 60⟩ = ⟨0.25, 0.75⟩
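In code, this final normalisation step is just:

```python
# Normalising the visit counts: alpha * <20, 60> -> <0.25, 0.75>.
counts = {True: 20, False: 60}   # Cloudy = true in 20 states, false in 60
total = sum(counts.values())     # alpha = 1 / total
estimate = {value: n / total for value, n in counts.items()}
# estimate == {True: 0.25, False: 0.75}
```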


```
function GIBBS-ASK(X, e, bn, N) returns an estimate of P(X | e)
  local variables: N[X], a vector of counts over X, initially zero
                   Z, the nonevidence variables in bn
                   x, the current state of the network, initially copied from e

  initialize x with random values for the variables in Z
  for j = 1 to N do
      for each Zi in Z do
          sample the value of Zi in x from P(Zi | mb(Zi)),
              given the values of MB(Zi) in x
          N[x] <- N[x] + 1 where x is the value of X in x
  return NORMALIZE(N[X])
```
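The pseudocode can be realised concretely. Below is a sketch in Python for the sprinkler network; the CPT numbers are the values usually quoted for this textbook example and should be read as assumptions, as should the helper names (`prob`, `blanket_weight`):

```python
import random

# Sprinkler network structure.
parents = {"Cloudy": [], "Sprinkler": ["Cloudy"],
           "Rain": ["Cloudy"], "WetGrass": ["Sprinkler", "Rain"]}

cpt = {  # P(variable = true | parent values); assumed textbook numbers
    "Cloudy":    {(): 0.5},
    "Sprinkler": {(True,): 0.1, (False,): 0.5},
    "Rain":      {(True,): 0.8, (False,): 0.2},
    "WetGrass":  {(True, True): 0.99, (True, False): 0.90,
                  (False, True): 0.90, (False, False): 0.0},
}

def prob(var, value, state):
    """P(var = value | parents(var)), read off the CPTs."""
    p_true = cpt[var][tuple(state[p] for p in parents[var])]
    return p_true if value else 1.0 - p_true

def blanket_weight(var, value, state):
    """Unnormalised P(var = value | mb(var)):
    P(var | parents) times P(child | parents) for each child."""
    st = dict(state)
    st[var] = value
    w = prob(var, value, st)
    for child in (c for c, ps in parents.items() if var in ps):
        w *= prob(child, st[child], st)
    return w

def gibbs_ask(X, evidence, n_sweeps):
    """Estimate P(X | evidence) by Gibbs sampling, as in GIBBS-ASK."""
    nonevidence = [v for v in parents if v not in evidence]
    state = dict(evidence)
    for v in nonevidence:                      # random initial state
        state[v] = random.choice([True, False])
    counts = {True: 0, False: 0}
    for _ in range(n_sweeps):
        for v in nonevidence:
            w_true = blanket_weight(v, True, state)
            w_false = blanket_weight(v, False, state)
            state[v] = random.random() < w_true / (w_true + w_false)
            counts[state[X]] += 1              # every visited state counts
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}
```

For the running query, `gibbs_ask("Cloudy", {"Sprinkler": True, "WetGrass": True}, 5000)` returns an estimate near the exact posterior, which is about ⟨0.175, 0.825⟩ under these assumed CPT values.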


This raises the question: how do we sample a variable given the state of its Markov blanket? For a value x of a variable X:

P(x | mb(X)) = α P(x | parents(X)) × ∏_{Y ∈ Children(X)} P(y | parents(Y))

where mb(X) is the Markov blanket of X and y is the current value of the child Y.

Given P(X | mb(X)), we can sample from it just as we have before.
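To make the formula concrete: for Cloudy at the state [Sprinkler = true, Rain = false], Cloudy has no parents and its children are Sprinkler and Rain, so the product has two factors. The CPT numbers below are the values usually quoted for this network and are an assumption here:

```python
# P(Cloudy | mb(Cloudy)) at Sprinkler = true, Rain = false.
# Assumed CPT values: P(c) = 0.5, P(s|c) = 0.1, P(s|-c) = 0.5,
# P(r|c) = 0.8, P(r|-c) = 0.2.

num_true  = 0.5 * 0.1 * 0.2   # P(c)  * P(s | c)  * P(-r | c)
num_false = 0.5 * 0.5 * 0.8   # P(-c) * P(s | -c) * P(-r | -c)

alpha = 1 / (num_true + num_false)
p_cloudy = (alpha * num_true, alpha * num_false)   # ≈ (0.048, 0.952)
```

Sampling Cloudy then amounts to a biased coin flip with these weights; note how unlikely Cloudy = true becomes once the sprinkler is known to be on and rain off.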


To summarise

Bayesian networks exploit conditional independence to represent a joint distribution compactly.

This gives reasonably efficient computation for some problems. We have seen five approaches to inference in Bayesian networks:

- Exact: inference by enumeration.
- Approximate: prior sampling.
- Approximate: rejection sampling.
- Approximate: importance sampling/likelihood weighting.
- Approximate: Gibbs sampling.

Each can answer a simple query for any Bayesian network.
