Tutorial questions: topic 4
1. The pseudo-code below defines the control loop for agent1, a practical reasoning agent.
1.  B := B0;              /* B0 is the initial value of B */
2.  I := I0;              /* I0 is the initial value of I */
3.  while true do
4.      get next percept ρ;
5.      B := brf(B, ρ);
6.      D := options(B, I);
7.      I := filter(B, D, I);
8.      π := plan(B, I);
9.      while not (empty(π) or succeeded(I, B)
                   or impossible(I, B)) do
10.         α := hd(π);
11.         execute(α);
12.         π := tail(π);
13.         get next percept ρ;
14.         B := brf(B, ρ);
15.         if reconsider(I, B) then
16.             D := options(B, I);
17.             I := filter(B, D, I);
18.         end-if
19.         if not sound(π, I, B) then
20.             π := plan(B, I);
21.         end-if
22.     end-while
23. end-while
(a) With reference to the pseudo-code that defines agent1, identify and explain the role of the variables and functions that ensure the deliberation of the agent and the means-ends reasoning of the agent.
Deliberation is carried out via two functions. The options function takes the agent’s current beliefs B and current intentions I, and on the basis of these produces a set of desires D. The filter function then selects between these competing desires D, taking into account the agent’s current beliefs B and current intentions I, and returns a new set of intentions.
Means-ends reasoning is carried out via the plan function, which, on the basis of the current beliefs B and current intentions I, determines a plan π to achieve the current intentions I.
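The way these functions fit together can be made concrete with a short, purely illustrative Python sketch. The set-of-strings representation of beliefs, desires and intentions and the bodies of options, filter and plan below are invented placeholders; only the calling pattern mirrors lines 6–8 of the pseudo-code.

def options(B, I):
    # Deliberation, step 1: propose candidate desires from the current
    # beliefs and intentions.  Toy rule: desire anything marked "want:".
    return {b.removeprefix("want:") for b in B if b.startswith("want:")}  # Python 3.9+

def filter(B, D, I):   # shadows the builtin on purpose, to match the pseudo-code
    # Deliberation, step 2: choose which desires to commit to as intentions.
    # Toy rule: keep the existing intentions and adopt every new desire.
    return I | D

def plan(B, I):
    # Means-ends reasoning: produce a plan (a sequence of actions) meant to
    # achieve the chosen intentions.  Toy rule: one dummy action per intention.
    return ["achieve_" + i for i in sorted(I)]

# Mirrors lines 6-8 of the control loop:
B = {"want:tea", "kettle_empty"}
I = set()
D = options(B, I)        # line 6: deliberation
I = filter(B, D, I)      # line 7: deliberation
pi = plan(B, I)          # line 8: means-ends reasoning
print(D, I, pi)          # {'tea'} {'tea'} ['achieve_tea']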
(b) One of the aims of the pseudo-code that defines agent1 is to ensure that the agent is committed to plans just as long as it is rational to be committed to them. With reference to the pseudo-code, explain how the control loop achieves this. In your answer, be sure to clearly identify all the circumstances under which an agent reconsiders its plan.
After each action that is performed (line 11) the agent will check the environment (line 13), update its beliefs (line 14), possibly reconsider its intentions (lines 15-17) and then, if its plan is no longer sound (line 19), it will replan (line 20).
The agent will also replan if it finds at any point that its intentions have already succeeded or have become impossible (line 9). If this is the case it will exit the inner loop, observe the environment again and update its beliefs (lines 4 and 5), reconsider its intentions (lines 6 and 7) and replan (line 8).
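A sketch of the inner loop may help make these two replanning triggers concrete. The stubs below (succeeded, impossible, sound, and so on) are invented stand-ins, written as one-line lambdas only to keep the fragment short and runnable; nothing here is the real implementation, and only the control flow follows lines 9-22 of the pseudo-code.

# Invented stand-ins; each would be domain-specific in a real agent.
succeeded  = lambda I, B: I <= B            # intentions count as achieved once they appear among the beliefs
impossible = lambda I, B: "stuck" in B      # toy impossibility test
sound      = lambda pi, I, B: bool(pi)      # toy soundness test
reconsider = lambda I, B: True              # always reconsider (see part (c))
brf        = lambda B, rho: B | {rho}       # belief revision: just add the percept
options    = lambda B, I: I                 # deliberation stubs: keep the
filter_    = lambda B, D, I: D              # current intentions unchanged
plan       = lambda B, I: []                # replanning stub
execute    = lambda alpha: print("executing", alpha)

def run_plan(pi, B, I, percepts):
    # Inner loop (lines 9-22): stay committed to pi only while it is non-empty
    # and I has neither succeeded nor become impossible (line 9); replan
    # whenever pi stops being sound (lines 19-20).
    while pi and not succeeded(I, B) and not impossible(I, B):
        alpha, pi = pi[0], pi[1:]      # lines 10 and 12: head and tail of the plan
        execute(alpha)                 # line 11
        B = brf(B, next(percepts))     # lines 13-14
        if reconsider(I, B):           # lines 15-18
            I = filter_(B, options(B, I), I)
        if not sound(pi, I, B):        # lines 19-21: replan if the plan is unsound
            pi = plan(B, I)
    return B, I

run_plan(["boil_kettle", "pour_water"], {"kettle_full"}, {"tea_made"},
         iter(["kettle_on", "tea_made"]))

With these stubs the example run executes both actions, then the line-19 soundness test fails on the emptied plan, a (here empty) replacement plan is produced, and the loop exits.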
(c) Another aim of the pseudo-code that defines agent1 is to ensure that the agent is committed to intentions as long as it is rational to be committed to them. With reference to the pseudo-code, explain how the control loop achieves this. In your answer, be sure to clearly identify all the circumstances under which an agent reconsiders its intentions.
After each action that is performed (line 11) the agent will check the environment (line 13), update its beliefs (line 14), check on the basis of these new beliefs and its intentions whether it should reconsider its intentions (line 15, if reconsider(I, B) returns true) and if so will reconsider its intentions (lines 16 and 17).
The agent will also reconsider its intentions if it finds at any point that they have already succeeded or have become impossible (line 9). If this happens it will exit the inner loop, observe the environment again, update its beliefs (lines 4 and 5), and then reconsider its intentions (lines 6 and 7).
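The policy encoded in reconsider(I, B) is left open by the pseudo-code. The sketch below shows three purely illustrative policies (using the same set-of-strings representation as above), just to make clear what kind of decision is plugged into line 15; which policy is rational depends on how quickly the environment changes and how costly deliberation is.

def reconsider_cautious(I, B):
    # Reconsider intentions after every single action, whatever the beliefs.
    return True

def reconsider_bold(I, B):
    # Never reconsider mid-plan; rely only on the line-9 test (succeeded or
    # impossible intentions) to drop intentions between plans.
    return False

def reconsider_when_threatened(I, B):
    # Reconsider only when some belief explicitly marks an intention as
    # threatened, e.g. a hypothetical percept "blocked:tea_made".
    return any(("blocked:" + i) in B for i in I)

print(reconsider_cautious({"tea_made"}, {"kettle_on"}))                # True
print(reconsider_bold({"tea_made"}, {"kettle_on"}))                    # False
print(reconsider_when_threatened({"tea_made"}, {"blocked:tea_made"}))  # True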
2. Using examples to illustrate your answer, explain how an agent’s behaviour is affected by its intentions and the attitudes you can expect an agent to have towards its intentions. The examples you use can be about human agents or artificial agents.
You should add your answer to the “Tutorial exercise: question 2” forum on KEATS, which you can find under the mandatory resources for this topic.
I encourage you to discuss each other’s answers with one another on the forum, and we will aim to discuss some answers together during the timetabled session for this topic.
An agent will devote resources to working out how to bring about its intentions (means-ends reasoning). An agent will also devote resources to trying to achieve its intentions: it doesn’t simply give up if an attempt fails and there are alternative ways it can try. It won’t adopt intentions that are inconsistent with one another. It won’t hold an intention that it believes is impossible, nor one that it believes is inevitable, yet agents are also aware that they may not achieve their intentions. Finally, agents don’t necessarily intend all the expected side effects of their intentions.
For example, if a student has an intention to have an academic career, you would expect them to spend some resources on working out how they are going to try to achieve it (apply for PhD positions, get a PhD, make contacts, write papers, apply for postdoc positions, apply for fellowships, and so on). If the student doesn’t get a particular job they apply for, you wouldn’t expect them to immediately give up on their intention to have an academic career. Instead you would expect them to apply for other jobs. There’s no point in me having an intention to have an academic career, since I already have one. Equally, it wouldn’t be rational for someone in their 90s who has worked in the fashion industry all their life to have an intention to have an academic career in science, since this is clearly not achievable. A student wouldn’t have an intention both to have an academic career and a career in the fashion industry, since you can’t
simultaneously do both. And it is rational for me to intend to travel up to stay with my family over Christmas even though I believe that journeys around Christmas are always busy and horrible, and I don’t have an intention to suffer through a horrid, busy journey.