
Artificial Intelligence for Robotics
Lesson 3: Particle Filters
Three Types of Filters
Particle filters are a family of algorithms for estimating the state of a system. Of the filters we cover in this class, particle filters are both the easiest to program and the most flexible.
Question 1 (State Space):
Which type of state space does each filter use? Continuous or discrete?
Check the appropriate box:
Histogram filters have a discrete state space, while Kalman filters and particle filters have a continuous state space.

Question 2 (Belief Modality):
With respect to a belief function, check which distribution, unimodal or multimodal, pertains to each filter.
Remember that even though histogram filters are discrete, they are able to represent multiple bumps. The Kalman filter, on the other hand, is a single Gaussian, which is by definition unimodal.

Question 3 (Efficiency):
When it comes to scaling with the number of dimensions of the state space, is each filter's representation (grid cells or Gaussians) quadratic or exponential?
The histogram's biggest disadvantage is that it scales exponentially. Any grid defined over k dimensions has exponentially many grid cells in the number of dimensions, which doesn't allow us to represent high-dimensional grids very well. This is sufficient for 3-dimensional robot localization problems, but becomes less useful with higher dimensions. In contrast, the Kalman filter is quadratic: it is fully represented by a vector (the mean) and a covariance matrix, which is quadratic in the number of dimensions. This makes the Kalman filter much more efficient, because it can operate on state spaces with more dimensions.
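To make the scaling argument concrete, here is a quick sketch counting how many numbers each representation needs as the dimension d grows (the choice of 10 cells per dimension is arbitrary):

```python
# A histogram over a grid with k cells per dimension needs k**d cells
# (exponential in d), while a Kalman filter needs only a mean vector of
# length d plus a d x d covariance matrix (quadratic in d).
def histogram_size(d, cells_per_dim=10):
    return cells_per_dim ** d

def kalman_size(d):
    return d + d * d

for d in (1, 3, 10):
    print(d, histogram_size(d), kalman_size(d))
# at d = 10: ten billion grid cells versus just 110 numbers
```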

Question 4 (Exact or Approximate):
Do histogram filters and Kalman filters give approximate or exact solutions?
Answer: Approximate
Both histogram and Kalman filters are not exact; they approximate the posterior distribution. Histogram filters are approximate because the world is not discrete: grid-based localization, for example, approximates a continuous state with discrete cells. Kalman filters are also approximate; they are exact only for linear systems, and the world is non-linear.
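As a tiny illustration of why a discrete filter is approximate, snapping a continuous position onto a grid (10-meter cells here, an arbitrary choice) loses precision:

```python
# A histogram filter can only represent positions at grid-cell resolution;
# any continuous position is approximated by the cell it falls into.
cell = 10.0
x = 37.3                                     # true continuous position
x_discrete = (int(x // cell) + 0.5) * cell   # center of the containing cell
print(x_discrete)  # 35.0, not 37.3
```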

Particle Filters
Let’s fill in the characteristics of particle filters the same way we did with the histogram and Kalman filters.
In terms of efficiency, the jury is still out. In certain situations particle filters scale exponentially, and it would be a mistake to use them for state spaces with more than about four dimensions. However, in tracking domains they tend to scale much better.
The key advantage of particle filters is that they are really easy to program. In this class you will write your own particle filter for a continuous value localization problem.
Here is a floor plan of an environment where a robot is located and the robot has to perform global localization. Global localization is when an object has no idea where it is in space and has to find out where it is just based on sensory measurements.
The robot, which is located in the upper right-hand corner of the environment, has range sensors that are represented by the blue stripes. The sensors use sonar, which means sound, to measure the distance to nearby obstacles. These sensors help the robot determine a good posterior distribution over where it is. What the robot doesn't know is that it is starting in the middle of a corridor, completely uncertain of its location.

In this environment the red dots are particles. Each is a discrete guess as to where the robot might be, structured as an x coordinate, a y coordinate, and a heading direction: three values that together comprise a single guess. A single guess is not a filter, however; it is the set of several thousand such guesses that together generate an approximate representation of the posterior of the robot.
The essence of particle filters is to have the particles guess where the robot might be moving, but also to have them survive, in a kind of "survival of the fittest," so that particles more consistent with the measurements are more likely to survive. As a result, places of high probability will collect more particles and therefore be more representative of the robot's posterior belief. The particles together make up the approximate belief of the robot as it localizes itself.
Using the Robot Class
Sebastian has written some code that will allow us to make a robot move along the x and y coordinates as well as in the heading direction. Take a minute to familiarize yourself with this code and then see how you can use it.
```python
from math import *
import random

landmarks = [[20.0, 20.0], [80.0, 80.0],
             [20.0, 80.0], [80.0, 20.0]]
world_size = 100.0

class robot:
    def __init__(self):
        self.x = random.random() * world_size
        self.y = random.random() * world_size
        self.orientation = random.random() * 2.0 * pi
        self.forward_noise = 0.0
        self.turn_noise = 0.0
        self.sense_noise = 0.0
```

Call the function robot() and assign the result to a variable myrobot, then set its position and print it:

```python
myrobot = robot()
myrobot.set(10.0, 10.0, 0.0)
print(myrobot)
```

● [x=10.0 y=10.0 heading=0.0]

Next, make the robot move:

```python
myrobot = robot()
myrobot.set(10.0, 10.0, 0.0)
myrobot = myrobot.move(0.0, 10.0)  # the robot moves 10 meters forward and does not turn
print(myrobot)
```

● [x=20.0 y=10.0 heading=0.0]

Now, make the robot turn:

```python
myrobot = robot()
myrobot.set(10.0, 10.0, 0.0)
myrobot = myrobot.move(pi/2, 10.0)  # the robot turns by pi/2 and moves 10 meters
print(myrobot)
```

● [x=10.0 y=20.0 heading=1.5707]
You can generate measurements with the command sense, which returns the distances to the four landmarks:

```python
myrobot = robot()
myrobot.set(10.0, 10.0, 0.0)
myrobot = myrobot.move(pi/2, 10.0)  # turn by pi/2 and move 10 meters
print(myrobot)
print(myrobot.sense())
```

● [x=10.0 y=20.0 heading=1.5707]
● [10.0, 92.195444572928878, 60.87625302982199, 70.0]

Robot Class Details
Included in the code that Sebastian has provided are a few functions to take note of:
```python
class robot:
    def __init__(self):
        self.x = random.random() * world_size
        self.y = random.random() * world_size
        self.orientation = random.random() * 2.0 * pi
        self.forward_noise = 0.0
        self.turn_noise = 0.0
        self.sense_noise = 0.0
```

The section of code above shows how the robot initializes its noise parameters, which at this point are all set to zero.
```python
def set(self, new_x, new_y, new_orientation):
    if new_x < 0 or new_x >= world_size:
        raise ValueError('X coordinate out of bound')
    if new_y < 0 or new_y >= world_size:
        raise ValueError('Y coordinate out of bound')
    if new_orientation < 0 or new_orientation >= 2 * pi:
        raise ValueError('Orientation must be in [0..2pi]')
    self.x = float(new_x)
    self.y = float(new_y)
    self.orientation = float(new_orientation)
```
```python
def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
    # makes it possible to change the noise parameters
    # this is often useful in particle filters
    self.forward_noise = float(new_f_noise)
    self.turn_noise = float(new_t_noise)
    self.sense_noise = float(new_s_noise)
```

The set_noise function above allows you to set the noise parameters.
```python
def measurement_prob(self, measurement):
    # calculates how likely a measurement should be,
    # which is an essential step
    prob = 1.0
    for i in range(len(landmarks)):
        dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
        prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
    return prob
```

The function above, measurement_prob, accepts a measurement and tells you how plausible it is. This is key to the "survival of the fittest" aspect of particle filters. However, we will not use this function until later.
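The move and sense methods themselves are not reproduced in these notes. A minimal, noise-free sketch consistent with the outputs shown earlier might look like this (the real class also perturbs motion and measurements using forward_noise, turn_noise, and sense_noise):

```python
from math import sqrt, cos, sin, pi

landmarks = [[20.0, 20.0], [80.0, 80.0], [20.0, 80.0], [80.0, 20.0]]
world_size = 100.0

class SimpleRobot:
    def __init__(self, x=0.0, y=0.0, orientation=0.0):
        self.x, self.y, self.orientation = x, y, orientation

    def move(self, turn, forward):
        # turn first, then drive forward; the world is cyclic (a torus)
        orientation = (self.orientation + turn) % (2 * pi)
        x = (self.x + forward * cos(orientation)) % world_size
        y = (self.y + forward * sin(orientation)) % world_size
        return SimpleRobot(x, y, orientation)

    def sense(self):
        # noise-free Euclidean distance to each landmark
        return [sqrt((self.x - lx) ** 2 + (self.y - ly) ** 2)
                for lx, ly in landmarks]

r = SimpleRobot(10.0, 10.0, 0.0)
r = r.move(pi / 2, 10.0)   # ends up near (10, 20), heading pi/2
print(r.sense()[0])        # distance to landmark (20, 20): about 10.0
```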
Question 5 (Moving Robot):
Using your interpreter, make a robot that satisfies the following requirements: it starts at (30.0, 50.0) heading north (= pi/2), turns clockwise by pi/2 and moves 15 meters, senses, then turns clockwise by pi/2 again, moves 10 meters, and senses.
After printing sense the first time around we get the following output:
● [39.0, 46.0, 39.0, 46.0]
After printing sense the second time around we get the following output:
● [32.0, 53.1, 47.1, 40.3]

Answer: Moving Robot

```python
# starts at 30.0, 50.0, heading north (= pi/2)
# turns clockwise by pi/2, moves 15 meters, senses
# turns clockwise by pi/2, moves 10 meters, senses
myrobot = robot()
myrobot.set(30.0, 50.0, pi/2)
myrobot = myrobot.move(-pi/2, 15.0)
print(myrobot.sense())
myrobot = myrobot.move(-pi/2, 10.0)
print(myrobot.sense())
```

● [39.0, 46.0, 39.0, 46.0]
● [32.0, 53.1, 47.1, 40.3]

Question 6 (Add Noise):
The following code has built-in noise variables for forward, turn, and sense:

```python
class robot:
    def __init__(self):
        self.x = random.random() * world_size
        self.y = random.random() * world_size
        self.orientation = random.random() * 2.0 * pi
        self.forward_noise = 0.0
        self.turn_noise = 0.0
        self.sense_noise = 0.0
```

Further below in the code you can set the noises:

```python
def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
    # makes it possible to change the noise parameters
    # this is often useful in particle filters
    self.forward_noise = float(new_f_noise)
    self.turn_noise = float(new_t_noise)
    self.sense_noise = float(new_s_noise)
```
In your code, set the noise values (forward, turn, sense) and repeat the motion:

```python
myrobot = robot()
myrobot.set_noise(5.0, 0.1, 5.0)  # forward noise, turn noise, sense noise
myrobot.set(30.0, 50.0, pi/2)
# turns clockwise by pi/2, moves 15 meters, senses
myrobot = myrobot.move(-pi/2, 15.0)
print(myrobot.sense())
# turns clockwise by pi/2, moves 10 meters, senses
myrobot = myrobot.move(-pi/2, 10.0)
print(myrobot.sense())
```

Notice that every time you hit run you get a different set of values.
Robot World
Now you can program the robot to turn, move straight after the turn, and sense the distance to four designated landmarks (L1, L2, L3, L4). The distances from the landmarks to the robot make up the measurement vector of the robot. The robot lives in a 100×100 world, so if it falls off one end it appears at the other: it is a cyclic world.
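The cyclic world is typically implemented with a modulo operation on each coordinate; a one-line sketch:

```python
world_size = 100.0
x = 95.0
x = (x + 10.0) % world_size  # driving 10 m to the right wraps around
print(x)  # 5.0
```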

Creating Particles
The particle filter you are going to program maintains a set of 1,000 (N = 1000) random guesses as to where the robot might be, each represented by a dot. Each dot is a vector containing an x-coordinate (e.g. 38.2), a y-coordinate (e.g. 12.4), and a heading direction (e.g. 0.18), the angle (in radians) at which the robot points relative to the x-axis; as such a robot moves forward, it will move slightly upwards.
In your code, every time you call the function robot() and assign it to a particle p[i], the elements p[i].x, p[i].y, p[i].orientation (which is the same as heading) are initialized at random.
In order to make a particle set of 1,000 particles, you have to program a separate piece of code that assigns 1,000 of those to a list.
Question 7 (Creating Particles):
Fill in the code so that it assigns 1,000 particles to a list.
```python
N = 1000
p = []
# Your code here
print(len(p))
```

```python
p = []
for i in range(N):  # iterate the loop 1000 times
    x = robot()     # create a robot object
    p.append(x)     # append the object to the growing list p
print(len(p))
```

If you print p itself, you get 1,000 particles, each of which has three values associated with it: an x-coordinate, a y-coordinate, and an orientation (heading direction).
Question 8 (Robot Particles):
Take each particle and simulate robot motion. Each particle should first turn by 0.1 and then move forward 5.0 meters. Go back to your code and make a new set p that is the result of applying that specific motion (turn by 0.1, move forward 5.0 meters) to each of the particles in p.

```python
p = []
for i in range(N):
    x = robot()
    p.append(x)
```

Here is one possible solution:

```python
N = 1000
p = []
for i in range(N):
    x = robot()
    p.append(x)

p2 = []
for i in range(N):  # go through all the particles again
    # append to list p2 the result of the motion applied to the i-th
    # particle, chosen from the particle set
    p2.append(p[i].move(0.1, 5.0))
p = p2
```

If you got this far, you are halfway through! However, the next half is going to be tricky.

Second Half of Particle Filters
The second half of particle filters works like this: suppose you have a robot that sits amid four landmarks and can measure the exact distances to the landmarks. To the right, the image shows the robot's location and the distances it measures, as well as "measurement noise," which is modeled as a Gaussian with a mean of zero. This means there is a chance of the measurement being too short or too long, and that probability is governed by a Gaussian.

Now we have a measurement vector that consists of the four distances to L1 through L4. If a particle hypothesizes that its coordinates are somewhere other than where the robot actually is (the red robot indicates the particle hypothesis), we have the situation shown below. The particle also hypothesizes a different heading direction. You can take the measurement vector from our robot and apply it to the particle. However, this ends up being a very poor measurement vector for the particle. The green indicates the measurement vector we would have predicted if the red particle actually were a good match for the robot's actual location.

The closer your particle is to the correct position, the more likely will be the set of measurements given that position. Here is the trick to particle filters: the mismatch between the actual measurement and the predicted measurement leads to an importance weight that tells you how important that specific particle is. The larger the weight, the more important the particle.

When you have a bunch of particles, each has its own weight; some are very plausible, while others look very implausible as indicated by the size of the particle.
Next we allow the particles to survive at random, but the probability of survival will be proportional to the weights. That is, a particle with a larger weight will survive at a higher rate than a particle with a small weight. This means that after resampling, which is randomly drawing new particles from the old ones with replacement in proportion to the importance weights, the particles with a higher importance weight will live on, while the smaller ones will die out. The "with replacement" aspect of this selection method is important because it allows us to choose high-probability particles multiple times. This causes the particles to cluster around regions with high posterior probability.
From here you want to implement a method of setting importance weights, which is related to the likelihood of a measurement, and a method of resampling that grabs particles in proportion to those weights.

Have a look at this code:
```python
# this is a random initialization of the robot, which will return a random output
myrobot = robot()
myrobot = myrobot.move(0.1, 5.0)
Z = myrobot.sense()
print(myrobot)
```
Question 9 (Importance Weight):
Program a way to assign importance weights to each of the particles in the list. Make a list of 1,000 elements, where each element contains a number proportional to how important the corresponding particle is. To make things easier, Sebastian has written the function measurement_prob. This function accepts a single parameter, the measurement vector Z that was just defined, and calculates as output how likely that measurement is. By using a Gaussian, the function measures how far the predicted measurements are from the actual measurements.
```python
def measurement_prob(self, measurement):
    # calculates how likely a measurement should be,
    # which is an essential step
    prob = 1.0
    for i in range(len(landmarks)):
        dist = sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
        prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
    return prob
```
For this function to run properly, you have to assume that there is measurement noise for the particles, so we need to change our particle generator:

```python
p = []
for i in range(N):
    x = robot()
    x.set_noise(0.05, 0.05, 5.0)  # this line ensures particles have a certain amount of noise
    p.append(x)
```
Once again, please program a list w of 1,000 elements, so that each number in this list reflects the output of the function measurement_prob applied to the measurement Z. This will help us when we want to resample our particles to create a new set that better matches the position of our robot.

```python
w = []
for i in range(N):
    # append the output of measurement_prob for the i-th particle,
    # given the measurement Z
    w.append(p[i].measurement_prob(Z))
```

The results include some outputs that are highly unlikely, with exponents of -146, while others are more likely, with exponents of, for example, -5; these are the particles that are more likely to survive.
For the final step of the particle filter algorithm, you have to sample particles from p with a probability that is proportional to a corresponding w value. Particles in p that have a large output value should be drawn more frequently than the ones with a smaller value. How hard can that be?
Resampling is the trickiest part of programming a particle filter. Resampling is when you generate a new list of particles by letting some of your old particles survive and killing off others. When you are given N particles to resample, each of them will have three values (x, y, and orientation) and a weight, w. The weights are continuous values which sum to W:

W = Σᵢ wᵢ

We can normalize the weights:

αₙ = wₙ / W

The sum of all alphas (the normalized weights) is:

Σᵢ αᵢ = 1

Resampling puts all the particles and their normalized weights into a big bag. It then draws N new particles with replacement, picking each particle with a probability proportional to the value of its α. For example, let's pretend we have 5 particles to be resampled with normalized weights of α1, α2, α3, α4, and α5. The values of α2 and α3 are larger than the other three; first we may draw α2, which becomes p2, and similarly α3 might also be large and picked as p3. By chance you may also pick the small α4, adding p4, and you can pick the same one again, like α2, to have two versions of p2, or maybe even three!
If there are N particles to begin with, you draw N times. In the end, those particles that have a high normalized weight will occur more frequently in the new set. This is resampling.
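Putting these pieces together, here is one straightforward (if O(N·M)) roulette-wheel sketch of resampling with replacement; the particle names and weights are made up for illustration:

```python
import random

def resample(particles, weights, N):
    # draw N particles with replacement, each with probability
    # proportional to its weight (normalization happens implicitly,
    # since we draw uniformly from [0, W) where W = sum of weights)
    W = sum(weights)
    new_particles = []
    for _ in range(N):
        r = random.uniform(0.0, W)
        acc = 0.0
        for particle, weight in zip(particles, weights):
            acc += weight
            if acc >= r:
                new_particles.append(particle)
                break
    return new_particles

random.seed(0)  # for a reproducible run
p = ['p1', 'p2', 'p3', 'p4', 'p5']
w = [0.10, 0.40, 0.30, 0.15, 0.05]
sample = resample(p, w, 1000)
# 'p2' carries 40% of the weight, so it should appear roughly 400 times
print(sample.count('p2'))
```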
Question 10 (Resampling):
During the process of resampling, if you randomly draw a particle in accordance with the normalized importance weights, what is the probability of drawing each of p1 through p5?