COSC1127/1125 Artificial Intelligence

School of Computing Technologies Semester 2, 2021

Prof. Sebastian Sardiña

Tutorial No. 2 Uninformed and Informed Search

This is a large tutorial sheet covering the search techniques. However, much of the first part overlaps with, or is a refresher of, techniques and analysis you have done in Algorithms and Analysis, and there is also a practical lab using a web-based visualization tool. This tute sheet is very important for both your projects and the THE assessments, so make sure you cover it well!

1 Uninformed Search

PART 1: The jug problem …………………………………………………………………

From Tutorial 1. Consider this problem: We have one 3-litre jug, one 5-litre jug, and an unlimited supply of water. The goal is to get exactly one litre of water into either jug. Either jug can be emptied or filled, or poured into the other.

(a) For this problem give:

i. An appropriate data structure for representing a state.

ii. The initial state.

iii. The final states.

iv. A specification of the operators (or actions) which includes the preconditions that must be satisfied before the operator can be used and the new state generated.

v. The solution to the problem.

Solution: See solution to tutorial 1.

(b) In the previous exercise, a representation for states and the full state space were developed. For the same problem, apply each of the following search strategies and note:

• The order in which nodes are created in memory.

• The nodes that are not created in memory at all.

i. Breadth first search with no checking for duplicate states.

Solution:



(0, 0)
├── (3, 0)
│   ├── (0, 0)
│   │   ├── (3, 0)
│   │   └── (0, 5)
│   ├── (0, 3)
│   │   ├── (0, 0)
│   │   ├── (3, 0)
│   │   └── (3, 3)
│   └── (3, 5)
│       └── ···
└── (0, 5)
    ├── (0, 0)
    │   └── ···
    ├── (3, 2)
    │   └── ···
    └── (3, 5)
        └── ···

Basically, you need to generate every possible successor of each parent state, and you do this state by state (from left to right) and level by level (from top down). Note that the operators applied are not shown over each arc here, but you should normally label every arc with its operator.
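As an aside, this strategy can be sketched in code. The following is a minimal illustration (not part of the original worksheet): states are pairs (contents of the 3-litre jug, contents of the 5-litre jug), and a seen set is included, i.e. the duplicate-checking variant of part ii, so the frontier stays finite:

```python
from collections import deque

def successors(state):
    """States reachable in one action from (a, b), where a is the
    3-litre jug and b is the 5-litre jug."""
    a, b = state
    states = {
        (0, b), (a, 0),          # empty either jug
        (3, b), (a, 5),          # fill either jug
    }
    pour = min(a, 5 - b)         # pour the 3-litre jug into the 5-litre jug
    states.add((a - pour, b + pour))
    pour = min(b, 3 - a)         # pour the 5-litre jug into the 3-litre jug
    states.add((a + pour, b - pour))
    states.discard(state)        # drop actions that change nothing
    return sorted(states)        # fixed order, mirroring left-to-right generation

def bfs(start=(0, 0)):
    """Breadth-first search for a state with exactly one litre in a jug."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if 1 in path[-1]:                 # goal: one litre in either jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:           # duplicate check keeps the tree finite
                seen.add(nxt)
                frontier.append(path + [nxt])

print(bfs())  # [(0, 0), (3, 0), (0, 3), (3, 3), (1, 5)]
```

Note that BFS finds a shortest solution: four actions, ending with one litre in the 3-litre jug.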

ii. Breadth first search with checking for duplicate states.

Solution: Your turn!

iii. Depth first search with no checking for duplicate states.

Solution: Your turn!

iv. Depth first search with checking for duplicate states.

Solution:

(0, 0)
├── (3, 0)
│   ├── (0, 3)
│   │   └── (3, 3)
│   │       └── (1, 5)
│   │           └── (1, 0)
│   │               └── (0, 1)   ← goal
│   └── (3, 5)
└── (0, 5)

v. Iterative deepening with no checking for duplicate states.

Solution:

Step 1:

(0, 0)

Step 2:

(0, 0)
├── (3, 0)
└── (0, 5)

Step 3:

(0, 0)
├── (3, 0)
│   ├── (0, 0)
│   ├── (0, 3)
│   └── (3, 5)
└── (0, 5)
    ├── (0, 0)
    ├── (3, 2)
    └── (3, 5)

This goes on for as many more steps as are required to find the goal.
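The scheme above can also be sketched as repeated depth-limited searches. This is a minimal illustration (not part of the original worksheet), using the same (3-litre jug, 5-litre jug) state pairs and, as in this variant, no duplicate checking:

```python
import itertools

def successors(state):
    """States reachable in one action from (a, b): a = 3-litre jug, b = 5-litre jug."""
    a, b = state
    states = {(0, b), (a, 0), (3, b), (a, 5)}     # empty / fill either jug
    pour = min(a, 5 - b)                          # pour 3-litre into 5-litre
    states.add((a - pour, b + pour))
    pour = min(b, 3 - a)                          # pour 5-litre into 3-litre
    states.add((a + pour, b - pour))
    states.discard(state)
    return sorted(states)

def depth_limited(path, limit):
    """Depth-first search from path[-1] down to the given depth limit.
    No duplicate checking, matching this variant of the exercise."""
    if 1 in path[-1]:                  # goal: exactly one litre in either jug
        return path
    if limit == 0:
        return None
    for nxt in successors(path[-1]):
        found = depth_limited(path + [nxt], limit - 1)
        if found:
            return found
    return None

def iddfs(start=(0, 0)):
    """Iterative deepening: depth-limited searches with limits 0, 1, 2, ..."""
    for limit in itertools.count():
        found = depth_limited([start], limit)
        if found:
            return found

print(len(iddfs()) - 1)  # number of actions in the solution found: 4
```

The goal is first found when the limit reaches 4, the depth of the shallowest goal state.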

vi. Iterative deepening with checking for duplicate states.

Solution: Your turn! It is very similar to (v).

vii. Is bi-directional search possible for this problem?

Solution: Yes, it is possible to use bi-directional search, since you know both the initial and goal states. Bi-directional search basically means that you expand nodes from the initial and goal nodes simultaneously (with or without checking for duplicates), until the two search trees meet and form a single path from the initial to the goal state. One caveat is that, without being given the full state space, it can be difficult to search backwards, as new transitions (e.g. 'unfill', 'unempty' and 'unpour') need to be defined, and these are not all that intuitive.

PART 2: Search methods …………………………………………………………………..

(a) Consider the problem of getting from Arad to Bucharest in Romania from Tutorial 1. Provide the part of the search space that is realised in memory and the order of node expansion if uniform cost search is used.¹

¹When material is taken from the course AIMA book, the Figure/Table caption will be included for reference.

COSC1127/1125 – AI Tutorial 2: Uninformed and Informed Search Page 4 of 11

Solution: (Note: this is a partial solution)

(A, 0) [1]
├── (Z, 75) [2]
│   └── .... [5]
├── (T, 118) [3]
│   └── (L, 229) [7]
│       └── .... [9]
│           └── .... [12]
└── (S, 140) [4]
    ├── .... [8]
    └── (R, 220) [6]
        ├── (C, 366) [11]
        │   └── (D, 486)
        └── (P, 317) [10]
            ├── (C, 455)
            └── (B, 418) [13]

Cities are represented by their first letter, the second component of each pair is the path cost g, and the bracketed red numbers indicate the order in which nodes are expanded.
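Uniform cost search itself can be sketched with a priority queue. This is a minimal illustration, not part of the original worksheet; the road distances are transcribed from the AIMA Romania map (western portion only, which is all this search touches):

```python
import heapq

# Road distances between Romanian cities (western portion of the AIMA map).
ROADS = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Zerind', 'Oradea'): 71, ('Oradea', 'Sibiu'): 151,
    ('Timisoara', 'Lugoj'): 111, ('Lugoj', 'Mehadia'): 70,
    ('Mehadia', 'Drobeta'): 75, ('Drobeta', 'Craiova'): 120,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu'): 80,
    ('Rimnicu', 'Pitesti'): 97, ('Rimnicu', 'Craiova'): 146,
    ('Craiova', 'Pitesti'): 138, ('Fagaras', 'Bucharest'): 211,
    ('Pitesti', 'Bucharest'): 101,
}

def neighbours(city):
    for (a, b), d in ROADS.items():
        if a == city:
            yield b, d
        elif b == city:
            yield a, d

def uniform_cost(start, goal):
    """Expand the cheapest frontier node first; goal-test on expansion."""
    frontier = [(0, start, [start])]
    expanded = []                       # order in which nodes are expanded
    while frontier:
        g, city, path = heapq.heappop(frontier)
        if city in expanded:
            continue                    # a cheaper path here was already expanded
        expanded.append(city)
        if city == goal:
            return g, path, expanded
        for nxt, d in neighbours(city):
            if nxt not in expanded:
                heapq.heappush(frontier, (g + d, nxt, path + [nxt]))

cost, path, order = uniform_cost('Arad', 'Bucharest')
print(cost, path)  # 418 ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']
```

The expansion order it produces matches the red numbers in the figure: Zerind (75) and Timisoara (118) are expanded before Sibiu (140), even though neither leads anywhere useful.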

(b) What are the dimensions along which we judge the various search algorithms? Discuss each of them for each algorithm.

Solution:

Dimensions: time required, space required, completeness, and optimality.

All search algorithms are worst-case exponential in time, but they vary across the other properties! Check the book for details and the meaning of each of these dimensions.

PART 3: Search Lab ………………………………………………………………………


Next, let us get our hands dirty by doing this fun practical lab. Objectives:

1. To reinforce your understanding of basic search methods.

2. To acquaint you with the operation and presentation of the AI-Search software.

(a) Preparation — N-Puzzle

1. In Google Chrome, browse to the following address and open the AI-Search Software (2015):

http://www.cs.rmit.edu.au/AI-Search/2015/

2. This activity will provide some background and basic instructions for using the AI-Search Software. The AI-Search software should load and begin running (in a separate window) automatically.

3. Consider the role and application of search in Artificial Intelligence. In particular, you have to consider the Depth-First and Breadth-First search methods.

(b) Depth-First Search Algorithm

1. Set the following 8-Puzzle problem using options from the Problem menu.

START BOARD:    GOAL BOARD:
0 2 3           1 2 3
1 4 5           8 4 5
8 7 6           0 7 6

2. In the Search algorithm dropdown, choose Depth-First Search.

3. In the Control mode dropdown, choose Single-step.

4. Click the Start button to begin the search process.

5. Use the Next and Back buttons to step through this problem, watching and verifying that the algorithm text coincides with the algorithm animation.

6. When the goal is reached, the solution path will be highlighted. Take notes of the length of the solution path, and the number of nodes in the open and closed lists.

7. Click the Reset button.

8. Update the initial state and goal state as follows:

START BOARD:    GOAL BOARD:
0 2 3           1 0 3
1 4 5           8 2 5
8 7 6           7 4 6

9. Repeat steps 5 to 7 with the new configuration.

10. In the Control mode dropdown, choose Burst mode to see a visual overview of the search space tree constructed by a Depth-First algorithm.

11. Begin burst-mode searching through this problem by clicking the Start button. Take notes of the shape of the search tree, and the way in which it grows.

12. Reset the problem and try your own Initial state and Goal state using the Burst mode. Again, take notes of the shape of the search tree, and the way in which it grows.

(c) Breadth-First Search

1. Click the Reset button.

2. In the Search algorithm dropdown, choose Breadth-First Search.

3. In the Control mode dropdown, choose Single-step.

4. Set the following 8-Puzzle problem:



START BOARD:    GOAL BOARD:
0 2 3           1 2 3
1 4 5           8 4 5
8 7 6           0 7 6

5. Click the Start button to begin the search process.

6. Use the Next and Back buttons to step through this problem, watching and verifying that the algorithm text coincides with the algorithm animation.

7. Once the goal is reached, click the Reset button.

8. In the Control mode dropdown, choose Burst mode to see a visual overview of the search space tree constructed by Breadth-First Search.

9. Begin burst-mode searching through this problem by clicking the Start button. Take notes of the shape of the search tree, and the way in which it grows.

10. Reset the problem and try your own Initial state and Goal state using the Burst mode. Again, take notes of the shape of the search tree, and the way in which it grows.

(d) Breadth-First vs. Depth-First Comparison

1. You can compare Breadth-First and Depth-First with a Burst Mode Algorithm Race. Either team up with the person sitting next to you, or invoke a second copy of the animation software by opening the URL for the AI-Search Product page again (see above).

2. Choose Depth-First Search for one instance and Breadth-First Search for the other.

3. Set both instances with the following Initial state and Goal state:

START BOARD:    GOAL BOARD:
0 2 3           1 2 3
1 4 5           8 0 4
8 7 6           7 6 5

4. Click Start simultaneously.

5. Take notes of the shape of the search tree, and the way in which it grows.

6. Try a few problems of your own choice. Try to find some problems that Depth-First Search can solve quickly and some others that Breadth-First Search can solve quickly.

(e) Relative Size of Search Space Tree

1. From any given starting 8-Puzzle game board, there are a very large number of possible game boards that could be produced by following the rules of the puzzle. If we assume that from any possible arrangement of the 8-Puzzle we can get to any other possible arrangement of the 8-Puzzle, then we essentially have a permutations problem. How many permutations of the 8-Puzzle board exist?

2. To gain some small appreciation of the relatively small portion of the "entire" search space some algorithms actually explore, have a look at the following two problems:

START 1:   GOAL 1:   START 2:   GOAL 2:
1 2 3      1 2 4     1 2 3      1 2 3
4 5 6      3 5 6     4 5 6      8 0 4
7 8 0      7 8 0     7 8 0      7 6 5

The animation shows the relationship between the underlying search space and the nodes that are actually realised in computer memory.
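As a side note, the permutation count asked about in step 1 can be checked with a couple of lines. This is a sketch, not part of the original lab; the "only half are reachable" observation is the standard parity argument for the 8-puzzle:

```python
import math

arrangements = math.factorial(9)   # placements of 8 tiles plus the blank
reachable = arrangements // 2      # the state space splits into two parity classes
print(arrangements, reachable)     # 362880 181440
```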



2 Informed Search

PART 4: Informed search in a map …………………………………………………………

Consider the problem of getting from Arad to Bucharest in Romania and assume the straight-line distance (SLD) heuristic will be used.

(a) Give the part of the search space that is realised in memory and the order of node expansion for:

i. Greedy search assuming that a list of states already visited is maintained.

Solution:

(A, 366) [1]
├── (T, 329)
├── (S, 253) [2]
│   ├── (F, 178) [3]
│   │   └── (B, 0)
│   ├── (R, 193)
│   └── (O, 380)
└── (Z, 374)

Greedy search for Bucharest, using the straight-line distance to Bucharest as the heuristic function hSLD. Nodes are labelled with their h-values, and the bracketed red numbers give the order of expansion.

The order of nodes chosen for expansion: Arad, Sibiu, Fagaras. We don’t need to expand Bucharest, as greedy search is not optimal, so once we find the goal node during an expansion, that is enough.

ii. A∗ search assuming that a list of states already visited is maintained.

Hints:

• Nodes should be labelled with f = g + h, where h values are the straight-line distances to Bucharest.

• The order of nodes chosen for expansion: Arad, Sibiu, Rimnicu, Fagaras, Pitesti, Bucharest.

• Note that Fagaras is chosen for expansion before Bucharest. This is, in part, because we only check for the goal state when we expand a node because it had the lowest f value.

Please give it a fair try with the above hints. If you still need additional guidance, please check this video. However, note that if you needed to rely on the video, then there was probably insufficient learning. The best & expected case would be that you use the video to verify your approach. 🙂
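Both informed searches can be sketched with a single best-first routine whose evaluation function f is a parameter. This is a minimal illustration, not part of the original worksheet; the roads and straight-line distances are transcribed from the AIMA map (western portion only):

```python
import heapq

# Road distances between Romanian cities (western portion of the AIMA map).
ROADS = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Zerind', 'Oradea'): 71, ('Oradea', 'Sibiu'): 151,
    ('Timisoara', 'Lugoj'): 111, ('Lugoj', 'Mehadia'): 70,
    ('Mehadia', 'Drobeta'): 75, ('Drobeta', 'Craiova'): 120,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu'): 80,
    ('Rimnicu', 'Pitesti'): 97, ('Rimnicu', 'Craiova'): 146,
    ('Craiova', 'Pitesti'): 138, ('Fagaras', 'Bucharest'): 211,
    ('Pitesti', 'Bucharest'): 101,
}
# Straight-line distances to Bucharest (hSLD).
SLD = {'Arad': 366, 'Bucharest': 0, 'Craiova': 160, 'Drobeta': 242,
       'Fagaras': 176, 'Lugoj': 244, 'Mehadia': 241, 'Oradea': 380,
       'Pitesti': 100, 'Rimnicu': 193, 'Sibiu': 253, 'Timisoara': 329,
       'Zerind': 374}

def neighbours(city):
    for (a, b), d in ROADS.items():
        if a == city:
            yield b, d
        elif b == city:
            yield a, d

def best_first(start, goal, f):
    """Expand the frontier node minimising f(g, city); a visited list is
    kept, and the goal test happens when a node is chosen for expansion."""
    frontier = [(f(0, start), 0, start, [start])]
    visited = set()
    order = []                                   # expansion order
    while frontier:
        _, g, city, path = heapq.heappop(frontier)
        if city in visited:
            continue
        visited.add(city)
        order.append(city)
        if city == goal:
            return g, path, order
        for nxt, d in neighbours(city):
            if nxt not in visited:
                heapq.heappush(frontier, (f(g + d, nxt), g + d, nxt, path + [nxt]))

greedy = lambda g, city: SLD[city]      # greedy best-first: f = h
astar = lambda g, city: g + SLD[city]   # A*: f = g + h

print(best_first('Arad', 'Bucharest', greedy)[1])  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
print(best_first('Arad', 'Bucharest', astar)[2])   # the A* expansion order
```

Greedy returns the Fagaras route (cost 450), while A* expands Fagaras before Pitesti but ultimately returns the cheaper Pitesti route (cost 418).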

(b) How would the above searches differ if the list of states already visited is NOT maintained?



Solution: If the list of states already visited is NOT maintained, then there should be an extra node "Arad" added to both of the above figures. For example, for greedy search:

(A, 366) [1]
├── (T, 329)
├── (S, 253) [2]
│   ├── (A, 366)
│   ├── (R, 193)
│   ├── (F, 178) [3]
│   │   └── (B, 0) [4]
│   └── (O, 380)
└── (Z, 374)

(c) How do the above searches perform for planning a trip from Iasi to Fagaras? You do not need to do the detailed search (you are not given the heuristic function to Fagaras), but use the graphical map to extract the straight-line distance.

Solution: If the list of states visited is not maintained when applying the greedy search, the search will get stuck at "Neamt", because it gives the lowest h value (straight-line distance). If the visited states are checked and not allowed to be revisited, then both the greedy and A∗ searches should be able to find Fagaras eventually, but they might perform differently (similar to the above two figures).

PART 5: Heuristic choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

(a) Suppose we run a greedy search algorithm with h(n) = −g(n). What will this do to the search? (Recall that g(n) is the cost incurred from the initial state to the current state.)

(b) Suppose we run an A∗ algorithm with h(n) = 0. What sort of search will result?

(c) Explain why the set of states examined by A∗ is a subset of those examined by breadth-first search when the cost of every step is always 1.

Solution: Since g(n) is the cost from the initial state to the current state, an algorithm that minimises h(n) = −g(n) will always expand the node with the largest cost so far, that is, the one furthest from the initial state. Therefore the search will try to get as far away from the initial state as it can, with no concern for where the goal state is.

Solution: (Note: this is a partial solution) A∗ with h(n) = 0 is the same as …. search, which always expands the node with the smallest cost (or distance in the above example) to the initial state.

Solution: Breadth-first search can be seen as an A∗ search with g(n) being the depth of the search node n and h(n) = 0. So, in BFS, the decision to consider a state is based solely on its distance from the start state, which makes it always more "conservative" than any A∗. Using any heuristic that is better than just zero will cause the search to explore fewer nodes.

Refer to "heuristic domination" in the book (Section 4 in edition 2). Also, in Section 4.2.3 of Luger, G.F., Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison-Wesley (2002 edition), one can find a proof showing that better heuristics make A∗ explore fewer nodes.

(d) Consider the following problem (historically formulated as the (problematically colonial) missionaries and cannibals problem in the book):



Three thieves are fleeing across the countryside after successfully completing their biggest heist ever: a raid on RoboCorp, the biggest robotics corporation in the world. Each thief managed to steal one RoboBuddy™ prototype, the world's most sophisticated general-purpose robot (worth upwards of a million dollars each!). They reach a river and see a rickety boat by the shore. The boat can only take a maximum of two thieves, two robots (the robots can learn to row a boat by watching the humans), or one thief and one robot at once. The thieves don't trust each other, and know that if a situation occurs where there are more robots than thieves on either river bank at any one time, then their accomplices would run off and take a larger cut for themselves. Find a way to get the three thieves and their robots safely across the river, without anyone being double-crossed.

Is there a heuristic that would be useful for this problem? What about a generalised version of this problem where we have n thieves with robots and c boat capacity?

Solution: (Note: this is a partial solution) We can represent a state for this problem as follows: (#robots on the left, #thieves on the left, 1 if the boat is on the left, else 0).

One obvious heuristic measuring the goodness of a state would seem to be:

h(n) = #robots on the left + #thieves on the left

which is initially 6 and 0 at the goal (or the same for the opposite river bank). However, if we take the state (2, 0, 1), then we can move the final two robots across in one move; but the heuristic tells us two moves, which is an overestimate, making the heuristic inadmissible.

Therefore, we must use a heuristic that is something like:

h(n) = ….

When considering the generalised version of this problem, we are allowed to have a greater boat capacity, c. If we let c = 3 then the state (3, 0, 1) would overestimate, as we could take the final 3 robots across in one move, when the heuristic says that h(n) = 3 − 1 = 2, making it inadmissible again.

Therefore, for the general problem, we should use a heuristic that is more along the lines of:

h(n) = #robots on the left + #thieves on the left ....

This way, it does not matter what the state is, or the capacity of the boat. If the remaining thieves and robots can be moved in one go, this will be captured by the heuristic and it will never overestimate.

If we show moves as #robots + #thieves in boat with F = forward, B = back indicating the direction of travel, a solution would look like:

(3,3,1) →[1+1 F] (2,2,0) →[0+1 B] (2,3,1) →[2+0 F] ....
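Setting heuristics aside for a moment, the base problem is small enough to solve by brute force. This is a sketch (not part of the original worksheet) using the (#robots left, #thieves left, boat side) representation above, with boat capacity 2:

```python
from collections import deque

def safe(robots, thieves):
    """A bank is safe if no thief can be double-crossed there: either no
    thieves are present, or the thieves are not outnumbered by robots."""
    return thieves == 0 or thieves >= robots

def moves(state):
    """All safe states reachable by one boat crossing (capacity 2)."""
    r, t, boat = state
    direction = -1 if boat == 1 else 1            # leaving the bank the boat is on
    for dr in range(3):
        for dt in range(3):
            if 1 <= dr + dt <= 2:                 # 1 or 2 passengers in the boat
                nr, nt = r + direction * dr, t + direction * dt
                if 0 <= nr <= 3 and 0 <= nt <= 3 \
                        and safe(nr, nt) and safe(3 - nr, 3 - nt):
                    yield (nr, nt, 1 - boat)

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    """Breadth-first search, so the plan found uses the fewest crossings."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

plan = solve()
print(len(plan) - 1)  # 11 crossings are needed
```

The shortest plan takes 11 crossings, matching the classic result for this river-crossing puzzle.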

(e) Which of the following are true and which are false? Explain your answers.

i. Depth-first search always expands at least as many nodes as A* search with an admissible heuristic.

Solution: False: a lucky DFS might expand exactly d nodes to reach the goal. A* largely dominates any graph-search algorithm that is guaranteed to find optimal solutions.

ii. h(n) = 0 is an admissible heuristic for the 8-puzzle.

Solution: True: h(n) = 0 is always an admissible heuristic, since costs are nonnegative.

iii. A* is of no use in robotics because percepts, states, and actions are continuous.

Solution: False: A* search is often used in robotics; the space can be discretized or skeletonized.

iv. Breadth-first search is complete even if zero step costs are allowed.

Solution: True: the depth of the solution matters for breadth-first search, not the cost.

v. Assume that a rook can move on a chessboard any number of squares in a straight line, vertically or horizontally, but cannot jump over other pieces. Manhattan distance is an admissible heuristic for the problem of moving the rook from square A to square B in the smallest number of moves.

Solution: False: a rook can move across the board in one move, although the Manhattan distance from start to finish is 8.

PART 6: More heuristics …………………………………………………………………..

(a) Does A∗'s search time always grow at least exponentially with the length of the optimal solution?

(b) If h(·) is admissible and s is the start node, how is h(s) related to the cost of the solution found by A∗ search?

(c) If h(n) is the heuristic cost to the goal and h∗(n) is the actual cost to the goal, prove that if h(n) = h∗(n) for all nodes n, then whenever A∗ expands a node x, x must lie on an optimal path to a goal.

Solution: (Note: this is a partial solution) No. It will be exponential in the worst case, but there are examples where it will not be. Can you give an example where it is not exponential? What is the best case that could happen?

Solution: h(s) will be less than or equal to the cost of the solution found by A∗. Because h is admissible, h(s) ≤ h∗(s), where h∗(s) is the (real) cost of the optimal solution from s. Now, because h is admissible, we know that A∗ will find an optimal solution, that is, it will find a solution with cost h∗(s). This is provided the heuristic is consistent, or A∗ is run without a closed list.

Solution: (Note: this is a partial solution) There are many ways to prove this one. Here is one:

Suppose x is a node that has been expanded but does not lie on an optimal path to a goal. Suppose further that h(s0) = h∗(s0) = c∗, that is, the cost of the optimal path from the initial state is c∗. First, if xo is a node on an optimal path to the goal, we know that f(xo) = .... = c∗. So, if x was at any point expanded, it means that f(x) < f(xo) for some node xo lying on an optimal path to the goal, that is, at some point of the search x was preferred over nodes lying on the optimal path. This means that g(x) + h(x) < c∗, and because h(x) = h∗(x), we get that .... . But this means that going through node x is actually cheaper ...., a contradiction. Thus, a node x that is expanded but does not lie on an optimal path to the goal cannot exist.
(d) (tricky!) Prove that if a heuristic is consistent, then it is also admissible. What about the converse?
Solution: (This is not the proof, but a strong hint for how to do it.)

By induction on the length of the optimal path. The base case is to consider any node n that is directly connected to a goal state g, and so h(n) ≤ c(n, g) + h(g) = c(n, g) = h∗(n) (assuming g is the cheapest goal connected to n). For the induction case, suppose h is admissible for any state n such that h∗(n) ≤ k, for some k > 0.

The converse is not true (a heuristic can be admissible but not consistent): find a counter-example (hint: consider a graph that is a path from the initial state to the goal state).

(e) (tricky!) Prove that A∗ without remembering nodes (i.e., tree search; without a closed list) is optimal when using an admissible heuristic.



Solution: The main idea of the proof is that at any point in the tree search, if a non-optimal goal node G is in the open list and a node N on the path to an optimal goal node G∗ is in the open list, then N will be expanded first. This goes all the way towards G∗, which ultimately will also be expanded before the non-optimal node G.

See a sketchy but quite detailed proof that I did here. You should now make it a full proof and reconstruct the reasoning there!

It is important that we are relying on TREE SEARCH (i.e., not remembering nodes), so that if we get to a previously visited state x in a cheaper way, we keep and process it, as this could be the way to the goal! (If we use a CLOSED list, i.e., GRAPH SEARCH, then x would be ignored and hence optimality lost, unless the heuristic is monotonic/consistent.)

However, we can use GRAPH SEARCH together with node "reconsideration", as in AIMA Figure 3.7, where we re-visit a node if the cost so far to the state is cheaper than the last recorded visit to that state. In that case, yes, we can use GRAPH SEARCH as well.

Two good posts on A* optimality and tree-vs-graph search are this and this.

RMIT AI 2021 (Semester 2) – Sebastian Sardiña

End of tutorial worksheet