7CCSMAMS: Agents & Multi-Agent Systems

Agent architectures

Some slides adapted from slides made available by M. Wooldridge, S. Parsons & T. Payne http://www.cs.ox.ac.uk/people/michael.wooldridge/pubs/imas/IMAS2e


Today
Response to preliminary module feedback
Reasoning agents
Specifying rules for agent behaviour
Alternative architectures for agents


Preliminary module feedback summary
…will be presented in the lecture once the survey has been completed by students


Motivating example: autonomous vehicle convoy
An autonomous vehicle in a convoy needs rules for staying in formation and reaching its destination.
But it also needs autonomy to react to situations.
How can we specify rules of behaviour while retaining autonomy to react?


General motivating questions for today
What can it mean for an agent to reason about what best to do to meet its goals?
How can rules for reasoning be specified while still allowing current circumstances to be taken into account?
If reacting to the environment is the primary concern, is it better to start by designing for reactivity rather than for reasoning?


Deductive reasoning agents
Classical approach to building agents: view them as a particular type of knowledge-based system, and bring all the associated methodologies of such systems to bear.
This is the symbolic AI paradigm.
The agent contains an explicitly represented, symbolic model of the world.
It makes decisions (for example, about what actions to perform) via symbolic reasoning.
Deductive reasoning = a particular symbolic approach in which representations are logical formulas and reasoning is logical deduction (theorem proving).


Deductive reasoning agents
Two key problems:
The representation/reasoning problem:
how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful. Includes: knowledge representation, automated reasoning, automatic planning.
The transduction problem:
how to translate the real world into an accurate, adequate symbolic description, in time for that description to be useful. Includes: computer vision, speech understanding, audio processing.


Deductive reasoning agents
Most researchers accept that neither problem is anywhere near solved.
The underlying problem lies with the complexity of symbol manipulation algorithms in general: many (most) search-based symbol manipulation algorithms of interest are highly intractable.
(See advantages of reactive agents later.)


Deductive reasoning agents
How can an agent decide what to do using theorem proving?
Basic idea: use logic to encode a theory stating the best action to perform in any given situation.
Let:
ρ be this theory (typically a set of rules)
Δ be a logical database that describes the current state of the world
Ac be the set of actions the agent can perform
Δ ⊢ρ φ means that φ can be deduced from Δ using ρ


Deductive reasoning agents
ρ = { Day(tuesday) ∧ Time(9am) → Do(goToAgentsLecture),
Day(friday) ∧ Time(7pm) → Do(goToDinner) }

Δ = { Day(friday),
Time(7pm) }

Ac = { goToAgentsLecture, goToDinner }

Δ |ρ ??


Deductive reasoning agents
ρ = { Day(tuesday) ∧ Time(9am) → Do(goToAgentsLecture),
Day(friday) ∧ Time(7pm) → Do(goToDinner) }

Δ = { Day(friday),
Time(7pm) }

Ac = { goToAgentsLecture, goToDinner }

Δ |ρ goToDinner


Deductive reasoning agents
/* try to find an action explicitly prescribed */
for each a ∈ Ac do
if Δ |ρ Do(a) then
return a
end-if
end-for
/* try to find an action not excluded */
for each a ∈ Ac do
if Δ |ρ ¬Do(a) then
return a
end-if
end-for
return null /* no action found */
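To make the loop concrete, here is a minimal Python sketch (an illustration, not the lecture's implementation): rules are propositional pairs of ground facts, ¬Do(a) is encoded as the fact ("NotDo", a), and deduction is simple forward chaining to a fixed point.

# Illustrative sketch of the deductive action-selection loop. A rule is a
# pair (antecedents, consequent) over ground facts; deducible() forward-
# chains the rules over the database delta until nothing new follows.

def deducible(delta, rho, phi):
    facts = set(delta)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rho:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return phi in facts

def select_action(delta, rho, actions):
    for a in actions:                              # explicitly prescribed?
        if deducible(delta, rho, ("Do", a)):
            return a
    for a in actions:                              # at least not excluded?
        if not deducible(delta, rho, ("NotDo", a)):
            return a
    return None                                    # no action found

rho = [({("Day", "tuesday"), ("Time", "9am")}, ("Do", "goToAgentsLecture")),
       ({("Day", "friday"), ("Time", "7pm")}, ("Do", "goToDinner"))]
delta = {("Day", "friday"), ("Time", "7pm")}
print(select_action(delta, rho, ["goToAgentsLecture", "goToDinner"]))
# -> goToDinner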


Deductive reasoning agents
ρ = { Day(saturday) → ¬Do(goToAgentsLecture),
Day(tuesday) ∧ Time(9am) → Do(goToAgentsLecture),
Day(friday) ∧ Time(7pm) → Do(goToDinner) }

Δ = { Day(saturday), Time(7pm) }

Ac = { goToAgentsLecture, goToDinner }

Δ |ρ ??


Deductive reasoning agents
ρ = { Day(saturday) → ¬Do(goToAgentsLecture),
Day(tuesday) ∧ Time(9am) → Do(goToAgentsLecture),
Day(friday) ∧ Time(7pm) → Do(goToDinner) }

Δ = { Day(saturday), Time(7pm) }

Ac = { goToAgentsLecture, goToDinner }

Δ |ρ goToDinner


Deductive reasoning agents
An example: The Vacuum World.
Goal is for the robot to clear up all dirt.


Deductive reasoning agents
Use three domain predicates to solve the problem:
In(x, y) agent is at (x, y)
Dirt(x, y) there is dirt at (x, y)
Facing(d) the agent is facing direction d
Possible actions:
Ac = {turn, forward, suck}

(turn means “turn right”)


Deductive reasoning agents
Rules ρ for determining what to do:
In(x, y) ∧ Dirt(x, y) → Do(suck)
In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
In(0, 1) ∧ Facing(north) ∧ ¬Dirt(0, 1) → Do(forward)
In(0, 2) ∧ Facing(north) ∧ ¬Dirt(0, 2) → Do(turn)
In(0, 2) ∧ Facing(east) → Do(forward)
…and so on!

Using these rules (plus other obvious ones), starting at (0, 0) the robot will clear up the dirt.
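For instance, suppose the robot starts at (0, 0) facing north and the only dirt is at (0, 1). The second rule fires (forward, to (0, 1)); then the first rule fires (suck); then the third fires (forward, to (0, 2)); then the fourth (turn, now facing east); then the fifth (forward, to (1, 2)); and so on across the grid.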


Planning agents
Rather than theorem proving, planning can be a more intuitive way of reasoning to meet goals. Planning agents find a sequence of actions that transforms an initial state into a goal state.

[Diagram: a search from the current state through a sequence of actions a1, a17, a142 to a goal state G]


Reasoning via planning: STRIPS example
The stack action occurs when the robot arm places the object x it is holding on top of object y.
Stack(x, y)
pre Clear(y) ∧ Holding(x)
del Clear(y) ∧ Holding(x)
add ArmEmpty ∧ On(x, y)
Similarly, there are unstack, pickup, putdown… actions

[Diagram: blocks A and B]
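To see how a planner uses such an operator, here is a minimal Python sketch (an illustration; the set-of-facts state representation and helper names are assumptions, not STRIPS syntax itself).

# Illustrative sketch of applying a STRIPS operator. A state is a set of
# ground facts; an action applies only if its precondition facts hold,
# and applying it removes the delete list and adds the add list.

def stack(x, y):
    return {"pre": {("Clear", y), ("Holding", x)},
            "del": {("Clear", y), ("Holding", x)},
            "add": {("ArmEmpty",), ("On", x, y)}}

def apply_action(state, action):
    if not action["pre"] <= state:
        raise ValueError("preconditions not satisfied")
    return (state - action["del"]) | action["add"]

state = {("Clear", "B"), ("Holding", "A"), ("OnTable", "B")}
print(apply_action(state, stack("A", "B")))
# -> {("OnTable", "B"), ("ArmEmpty",), ("On", "A", "B")} (in some order)

A planner then searches for a sequence of such applications that transforms the initial state into one satisfying the goal.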


Problems with deductive reasoning agents
Problems:
How to convert video camera input to Dirt(0, 1) or On(B, Table)?
Decision making using first-order logic is undecidable
Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems
Decision making assumes a static environment


Calculative rationality
“An agent is said to enjoy the property of calculative rationality if and only if its decision-making apparatus will suggest an action that was optimal when the decision-making process began.”
At time t1, the agent’s action function takes as input its internal state Δ1 at that point and uses its rule set ρ to determine the best action to perform. After some time, at t2, it manages to establish Δ1 ⊢ρ Do(α) for some α ∈ Ac.
If the agent has the property of calculative rationality, then α is an action that is guaranteed to have been optimal at time t1.
What if the world has changed between t1 and t2?
In that case α is no longer guaranteed to be optimal.


Accommodating dynamic environments
A reasoning agent in a dynamic environment, or in a multi-agent system in any environment, needs to meet two demands
It needs to be able to apply rules to determine what to do
It must not be so constrained by those rules that it is forced to act immediately, when it may need to react to other events first.
We need rules that can talk about acting ‘sometime in the future’ and similar, e.g. using temporal logic.


Temporal logic
Temporal logic extends classical logic with operators that refer to time, so that rules can say not only what holds now, but what must hold next, eventually, or always. The operators used in this lecture:
○φ	φ is true in the next state
◎φ	φ was true in the previous state
◇φ	φ is true now or at some future state (“sometime”)
☐φ	φ is true now and at all future states (“always”)
φ U ψ	φ is true until ψ becomes true
φ W ψ	φ is true unless ψ becomes true (like U, but ψ need never occur)
φ S ψ	φ has been true since a state at which ψ was true
start	true only at the very first state

For example, ¬year(2020) says it is not currently the year 2020, while ☐¬year(2020) says it never will be from now on.

Exercise
We can nest temporal operators and use them with the standard logical operators (negation: ¬, conjunction: ∧, disjunction: ∨, implication: →). What do the following formulas say?
☐(q → ☐ ¬p)
○(◎p)
((lightOn → ○ ¬lightOn) ∧ (¬lightOn → ○ lightOn)) S lightOn

Write the following statement as a temporal logic formula, using the predicates study(agents, you) and knowGreat(agents, you).
Unless you study agents, you won’t know how great they are.


Concurrent MetateM
Given a temporal logic, we now need a way to express behavioural rules.
An example framework is Concurrent MetateM.
In this framework, an agent’s specification is executed directly to generate behaviour.
Concurrently executing agents communicate via asynchronous broadcast message passing.
Agents are defined by:
the set of message types that they listen for,
the set of message types that they broadcast,
a set of temporal logic formulas that determine their behaviour.


Concurrent MetateM
Execution proceeds by a process of continually matching rules against a “history”, and firing those rules whose antecedents are satisfied.

The instantiated future-time consequents become commitments which must subsequently be satisfied.
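For example (using a rule from the exercise later in this lecture), if ◎ask(X) → ◇give(X) fires at time t with X = eager, the agent becomes committed to making give(eager) true at some time point at or after t, and it carries this commitment forward until it is satisfied.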


Concurrent MetateM
An agent definition consists of:
an agent identifier,
the message types the agent listens for (written in parentheses),
the message types the agent can broadcast (written in square brackets),
the temporal logic formulas that determine the agent’s behaviour.
For example, snowWhite (ask(X)) [give(X)]: … declares an agent snowWhite that listens for ask messages, can broadcast give messages, and behaves according to the rules following the colon (see the exercise below).

Concurrent MetateM
Agent execution:
at each time point:
1. Set environment propositions according to messages received;
2. Check which rules fire by comparing antecedents with history;
3. Jointly execute fired rule consequents together with commitments carried over from previous cycles.

When trying to satisfy commitments, the agent gives preference to its oldest commitments.
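A minimal sketch of this cycle in Python (an illustration, not real Concurrent MetateM, which executes temporal formulas directly; here each rule is simplified to “if the antecedent propositions held at the previous time point, commit to the consequent as an eventuality”, and at most one commitment is satisfied per cycle, oldest first).

# Illustrative sketch of a Concurrent MetateM-style execution cycle for
# one agent. A rule (antecedent, consequent) commits the agent to making
# the consequent true at some future time point whenever the antecedent
# held at the previous time point.
from collections import deque

def run(rules, inbox_by_time, steps):
    history = [set()]            # propositions true at each time point so far
    commitments = deque()        # outstanding eventualities, oldest first
    for t in range(steps):
        now = set(inbox_by_time.get(t, ()))       # 1. environment propositions
        for antecedent, consequent in rules:      # 2. fire rules against history
            if antecedent <= history[-1]:
                commitments.append(consequent)
        if commitments:                           # 3. satisfy oldest commitment
            now.add(commitments.popleft())
        history.append(now)
        print(f"t={t}: {sorted(now)}")
    return history

# e.g. an agent that, whenever it heard "ask" last cycle, eventually "give"s:
run([({"ask"}, "give")], {0: ["ask"], 1: ["ask"]}, 4)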


Concurrent MetateM
What is the behaviour of this system?
[Figures omitted: the slides trace a possible run of the system step by step, ending with the satisfaction of the agents’ commitments.]

Exercise
Trace the first 6 time steps of the system below.
snowWhite (ask(X)) [give(X)]:
◎ask(X) → ◇give(X)
start → ☐¬(give(X) ∧ give(Y) ∧ X ≠ Y)
eager (give(eager)) [ask(eager)]:
start → ask(eager)
◎give(eager) → ask(eager)
greedy () [ask(greedy)]:
start → ☐ask(greedy)
courteous (give(eager), give(greedy)) [ask(courteous)]:
((¬ask(courteous) S give(eager)) ∧ (¬ask(courteous) S give(greedy))) → ask(courteous)
shy (give(shy), ask(X)) [ask(shy)]:
start → ◇ask(shy)
◎ask(X) → ¬ask(shy)
◎give(shy) → ◇ask(shy)


Reasoning versus reactivity
Temporal logic allows us to separate what happens now from what happens sometime, allowing some reactivity.
But the reasoning is still resource demanding, and the connection between the real environment and the symbols representing it is problematic.

We could instead start our agent design from reactivity rather than from reasoning.


Brooks: Subsumption architecture
Brooks put forward three theses:
Intelligent behaviour can be generated without explicit representations of the kind that symbolic AI proposes.
Intelligent behaviour can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
Intelligence is an emergent property of certain complex systems.


Brooks: Subsumption architecture
He identifies two key ideas that have informed his research:
Situatedness and embodiment: ‘Real’ intelligence is situated/embedded in the world, not in disembodied systems such as theorem provers or expert systems.
Intelligence and emergence: ‘Intelligent’ behaviour arises as a result of an agent’s interaction with its environment. Also, intelligence is ‘in the eye of the beholder’; it is not an innate, isolated property.


Brooks: Subsumption architecture
A subsumption architecture is made up of task-accomplishing behaviour modules.
No (or not much) processing of data.
No complex symbolic representations.
No symbolic reasoning.

if SITUATION then ACTION


Brooks: Subsumption architecture
Many behaviours might fire at once. Which one to perform?

Behaviours organized into a hierarchy. Lower layers take precedence over higher layers.

In terms of computation, extremely simple. Yet such agents can still perform impressive tasks!


Brooks: Subsumption architecture
[Photo omitted; image credit: Flickr, Penn State]


Example subsumption architecture application
Steels’ Mars explorer system, using the subsumption architecture, achieves near-optimal cooperative performance in a simulated ‘rock gathering on Mars’ domain.

The objective is to explore a distant planet and, in particular, to collect samples of a precious rock. The location of the samples is not known in advance, but it is known that they tend to be clustered.

The mother ship emits a radio wave, so agents can find their way back to it by travelling up the signal gradient.


Example subsumption architecture application
For individual (non-cooperative) agents, the lowest-level behaviour (and hence the behaviour with the highest “priority”) is obstacle avoidance:
If detect an obstacle then change direction
If carrying samples and at the base then drop samples
If carrying samples and not at the base then travel up radio wave gradient
If detect a sample then pick sample up
If true then move randomly
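A minimal sketch of these behaviours in Python (an illustration; the percept fields and action names are assumptions). Ordering the rules by priority and executing the first whose condition holds is the essence of the subsumption control scheme.

# Illustrative sketch of the Mars explorer's individual behaviours.
# Behaviours are (condition, action) pairs ordered by priority; the
# agent performs the action of the first condition that holds.

def action(percept):
    behaviours = [
        (lambda p: p["obstacle"],                      "change_direction"),
        (lambda p: p["carrying"] and p["at_base"],     "drop_samples"),
        (lambda p: p["carrying"] and not p["at_base"], "travel_up_gradient"),
        (lambda p: p["sample"],                        "pick_up_sample"),
        (lambda p: True,                               "move_randomly"),
    ]
    for condition, act in behaviours:
        if condition(percept):
            return act

percept = {"obstacle": False, "carrying": True, "at_base": False, "sample": True}
print(action(percept))  # -> travel_up_gradient: carrying outranks picking up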


Example subsumption architecture application
[Diagram: the five rules above arranged in a subsumption hierarchy, from percept input to action output]


Example subsumption architecture application
[Diagram: the behaviour hierarchy embedded in the agent-environment loop]

see : E → Per

action : Per → Ac


Example subsumption architecture application
For cooperative agents, the lowest-level behaviour is again obstacle avoidance:
If detect an obstacle then change direction
If carrying samples and at the base then drop samples
If carrying samples and not at the base then travel up radio wave gradient and drop 2 crumbs
If detect a sample then pick sample up
If sense crumbs then pick up 1 crumb and travel down radio wave gradient
If true then move randomly


Example subsumption architecture application
[Diagram: the six cooperative rules arranged in a subsumption hierarchy, from percept input to action output]


Example subsumption architecture application
Steels shows that cooperative agents achieve near-optimal performance in many situations.

The solution is cheap and robust.


Discussion

What are the limitations of reactive agents, such as those using the subsumption architecture?

In what cases would it be hard to engineer practical agent(s) using this approach?


Hybrid architectures
Many researchers have argued that neither a completely deliberative nor completely reactive approach is suitable for building agents.
They have suggested using hybrid systems, which attempt to marry classical and alternative approaches.

An obvious approach is to build an agent out of two (or more) subsystems:
a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI;
a reactive one, which is capable of reacting to events without complex reasoning.


Hybrid architectures
Often, the reactive component is given some kind of precedence over the deliberative one.

This kind of structuring leads naturally to the idea of a layered architecture, of which TOURINGMACHINES and INTERRAP are examples.

In such an architecture, an agent’s control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.


Hybrid architectures
A key problem in such architectures is what kind of control framework to embed the agent’s subsystems in, to manage the interactions between the various layers.

Horizontal layering
Layers are each directly connected to the sensory input and action output.
In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.

Vertical layering
Sensory input and action output are each dealt with by at most one layer.
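A minimal sketch of the two control regimes (illustrative Python; the layer names and the mediator's simple priority rule are assumptions, not TOURINGMACHINES or INTERRAP). With a single percept the two coincide here; the difference is architectural: in horizontal layering every layer runs and a mediator arbitrates, while in vertical layering at most one layer handles input and output at each level.

# Illustrative sketch of horizontal vs. vertical layering.
# Each layer maps a percept to a suggested action, or None to decline.

def reactive_layer(p):
    return "swerve" if p.get("obstacle") else None

def planning_layer(p):
    return "follow_route"

LAYERS = [reactive_layer, planning_layer]  # ordered by priority

def horizontal(percept):
    # Every layer sees the percept and suggests an action; a central
    # mediator then picks the highest-priority non-None suggestion.
    suggestions = [layer(percept) for layer in LAYERS]
    return next(s for s in suggestions if s is not None)

def vertical(percept):
    # The percept enters one layer at a time; control passes to the
    # next layer only if the current one declines to act.
    for layer in LAYERS:
        act = layer(percept)
        if act is not None:
            return act

print(horizontal({"obstacle": True}))   # -> swerve
print(vertical({"obstacle": False}))    # -> follow_route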


Hybrid architectures

With m possible actions suggested by each layer and n layers, a central control system must consider up to mⁿ combinations.
This introduces a bottleneck in the central control system.
The design is also not fault tolerant to layer failure.


Next week
Next week we move on to multi-agent systems, starting with agent-based modelling (simulation) before focusing on methods for achieving coordination, competition and agreement between agents.
