
Lecture 1: Introduction

Coordination


Motivating example: Global logistics chains
In global logistics, the delivery of materials is delegated to transport companies, each handling one stage of the journey
This requires coordination: each stage must collect the material from the previous stage
Automated agents can act on behalf of the companies, communicating and coordinating with one another


Today
Communication
How do agents understand each other?
Cooperation
How do agents divide large tasks between themselves?
Coordination
How do agents help and not hinder each other?
Expectation
How do agents know when they can rely on others to do things?


Communication
Agents are autonomous and self-interested.

One agent can’t force another to perform some action.

One agent can’t write data onto the internal state of some other agent.

Instead they can perform communicative actions to try to influence other agents.


Speech act theory
Speech act theory treats communication as action. It is a theory of how utterances are used to achieve intentions.
Austin [1962] identified three aspects of speech acts.
The locutionary act: the act of making the utterance
Saying “Please make me some tea”
The illocutionary act: the conveying of the speaker’s intentions to the hearer
The speaker has requested the hearer to make some tea
The perlocution: the effect of the act
The hearer (perhaps) makes some tea.
A speaker makes a locutionary act in the hope that the hearer correctly recognises the speaker’s intentions and acts accordingly.


Context and speech acts
Context can be very important for correctly understanding the speaker’s intentions.

Example
Do you want a coffee?
Coffee will keep me awake.


Classes of speech act
Searle [1969] identified five key classes of speech acts.
Representatives commit the speaker to the truth of an expressed proposition, e.g. ‘it’s raining’.
Directives attempt to get the hearer to do something, e.g. ‘please make the tea’.
Commissives commit the speaker to doing something, e.g. ‘I promise to come to the lecture on Thursday’.
Expressives express some mental state, e.g. ‘thank you!’.
Declarations entail the occurrence of an action in themselves, so effect some changes in an institutional state of affairs, e.g. declaring war, ‘We the jury find the defendant to be guilty’.


Performatives and content
We can think of a speech act as having two components:
a performative verb (its illocutionary force, e.g. request, inform, promise…);
propositional content (e.g. ‘the door is closed’).
performative = request
content = ‘the door is closed’
speech act = ‘please close the door’
performative = inform
content = ‘the door is closed’
speech act = ‘the door is closed!’
performative = inquire
content = ‘the door is closed’
speech act = ‘is the door closed?’


Agent communication languages
Autonomous heterogeneous agents in a multi-agent system require shared agent communication languages (ACLs) so they can communicate with one another.
For example, the KQML/KIF framework is an ACL comprising two parts:
The Knowledge Query and Manipulation Language (KQML), an outer language that defines performatives, e.g.
ask-if (‘is it true that. . . ’), perform (‘please perform the following action. . . ’), tell (‘it is true that. . . ’)
The Knowledge Interchange Format (KIF), used for expressing message content. KIF uses predicates to express facts and relationships, e.g.
(> (* (width chip1) (length chip1)) (* (width chip2) (length chip2)))
KIF expressions are wrapped in KQML statements to form the full message to be communicated between agents.
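As an illustration, here is a minimal sketch (in Python) of how the KIF expression above might be wrapped in a KQML performative. The field names (:sender, :receiver, :language, :content) follow common KQML usage, but the helper function itself is purely hypothetical:

```python
# Sketch: wrapping KIF content in a KQML performative.
# The kqml_message helper is an illustration, not a real KQML library.

def kqml_message(performative, sender, receiver, content, language="KIF"):
    """Build a KQML message string wrapping a content expression."""
    return (f"({performative}\n"
            f"  :sender {sender}\n"
            f"  :receiver {receiver}\n"
            f"  :language {language}\n"
            f"  :content \"{content}\")")

kif_content = "(> (* (width chip1) (length chip1)) (* (width chip2) (length chip2)))"
print(kqml_message("tell", "agentA", "agentB", kif_content))
```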


ACL semantics based on planning actions
An ACL must have a semantics defining what communications mean, so agents know what to communicate to achieve goals.
One approach is to treat communications like AI planning actions.
An agent reasons about which speech act to perform in the same way that it reasons about normal (non-communication) actions.
For example, the semantics for a request: request(s, h, φ)
pre: s believes (h can do φ)
s believes (h believes (h can do φ))
s believes (s wants φ)
post: h believes (s believes (s wants φ))
Receiving a request does not on its own cause the hearer to perform the action: there is a separation between the illocutionary act and the perlocutionary act.
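A minimal sketch of treating request(s, h, φ) as a planning operator, with the speaker checking the preconditions against its beliefs and the postcondition updating the hearer’s beliefs. The nested-tuple belief representation and helper names are assumptions made for this example, not a standard implementation:

```python
# Sketch: request(s, h, phi) as a planning operator.
# Beliefs are represented as nested tuples purely for illustration.

def can_request(beliefs, s, h, phi):
    """Check the preconditions of request(s, h, phi) against s's beliefs."""
    return (("can_do", h, phi) in beliefs and                      # s believes h can do phi
            ("believes", h, ("can_do", h, phi)) in beliefs and     # s believes h believes it too
            ("wants", s, phi) in beliefs)                          # s believes s wants phi

def apply_request(hearer_beliefs, s, phi):
    """Postcondition: the hearer now believes that s wants phi."""
    return hearer_beliefs | {("believes", s, ("wants", s, phi))}

speaker_beliefs = {("can_do", "h", "make_tea"),
                   ("believes", "h", ("can_do", "h", "make_tea")),
                   ("wants", "s", "make_tea")}
if can_request(speaker_beliefs, "s", "h", "make_tea"):
    hearer_beliefs = apply_request(set(), "s", "make_tea")
    print(hearer_beliefs)
```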


ACL semantics based on mental states
Alternatively, we might specify an ACL in terms of mental states.
For example, for an inform message: inform(s, h, φ)
Feasibility precondition: B_s φ ∧ ¬B_s(Bif_h φ ∨ Uif_h φ)
Rational effect: B_h φ
– B_i φ: agent i believes φ
– Bif_i φ: agent i has a definite opinion on the truth or falsity of φ
– Uif_i φ: agent i is uncertain about the truth or falsity of φ
These semantics allow logical reasoning, so an agent can deduce which communications will best achieve its goals.
But one agent can still never be sure whether other agents are conforming to these semantics.


ACL semantics based on social commitments
A third approach is to define ACL semantics based on public commitments.
Express the meaning of a communicative act in terms of commitments directed from one agent to another.
If I inform you ‘p’, then I should not inform someone else ‘not p’ unless I also inform you that ‘p’ is no longer the case.
If I promise you that I will do task t by the end of the week, then I am expected to do task t by the end of the week.
An agent can check whether these expectations are violated or not.
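A minimal sketch of how an agent might record such commitments and check for violations; the Commitment structure and deadline handling are illustrative assumptions:

```python
# Sketch: recording social commitments and checking for violations.
# The Commitment record and deadline handling are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Commitment:
    debtor: str      # the agent who made the commitment
    creditor: str    # the agent the commitment is directed towards
    content: str     # e.g. "do task t"
    deadline: int    # e.g. a day number
    fulfilled: bool = False

def is_violated(c: Commitment, today: int) -> bool:
    """A commitment is violated if its deadline has passed unfulfilled."""
    return (not c.fulfilled) and today > c.deadline

c = Commitment(debtor="A", creditor="B", content="do task t", deadline=5)
print(is_violated(c, today=7))  # True: task t was not done by the deadline
```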


Discussion

Does an agent communication language such as KQML remove all ambiguity from software agent communications?

Does it allow everything agents might want to communicate to be communicated?

Does it solve problems not solvable by more common (non-agent-specific) data exchange formats?


Cooperation
It is often necessary or preferable for agents to work together in order to achieve their goals.

Since agents are autonomous and make decisions about what to do at run-time, they must be capable of dynamically coordinating their activities and cooperating with others.

Compare this with traditional distributed and concurrent systems, where coordination and cooperation are typically hardwired at design time.


Measuring cooperation
Bond and Gasser [1992] proposed two criteria for assessing how well agents work together.

Coherence is about how well the multiagent system behaves as a unit along some dimension of evaluation.
We can measure it in terms of solution quality, how efficiently resources are used, and conceptual clarity.

Coordination is about the degree to which agents can avoid extraneous activity, for example by synchronizing and aligning their activities.
If the system is perfectly coordinated, agents will not get in each others’ way, in a physical or a metaphorical sense.


Cooperative Distributed Problem Solving (CDPS)
Cooperation involves agents determining where other agents can help with a complex problem and requesting that help.
In a Cooperative Distributed Problem Solving domain, each agent in a loosely coupled network is capable of sophisticated problem solving and can work independently. However, no single agent has sufficient expertise, resources and information to solve a given problem, and agents differ in their expertise for solving parts of it [Durfee 1998].
A CDPS domain has characteristics such as the following:
Distributed data and control — no agent has sufficient information or control to solve entire problem
Modular problems with reasonable grain size
Agents spend more time on computation than on communication


The CDPS process
Cooperative Distributed Problem Solving involves these steps:
Problem decomposition
Subproblem allocation
Subproblem solution
Subproblem synthesis

Each step can be realised in many ways. Challenges include:
What level of granularity should a problem be decomposed to?
Does one agent have the expertise to decompose the problem?
How to decide which agents to allocate subproblems to?
How to reach agreement when agents refuse to take on subproblems?
What coordination is required to solve subproblems?


Contract Net protocol
The Contract Net protocol addresses the subproblem allocation step of CDPS, viewing cooperation as contract negotiation; a sketch of the exchange is given after the steps below.
Recognition: An agent has a task it cannot achieve in isolation or could achieve with better quality through cooperation.
Announcement: The agent broadcasts an announcement of the task specification to be achieved. This includes:
description of the task itself (may be executable)
any constraints (e.g. task completion deadline, quality constraints)
meta-task information (e.g. bid submission deadline)
Bidding: Agents receiving the announcement decide whether they are capable and willing to bid for the task, and return a bid if so.
Awarding: The agent that announced the task decides who to award the contract to and communicates this to bidding agents.
Expediting: The successful contractor expedites the task. This can involve generating further manager-contractor relationships.
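A minimal, illustrative sketch of the manager side of this exchange; the Bid/Contractor structures and the “highest self-reported quality” selection rule are assumptions made for the example, not part of the protocol specification:

```python
# Sketch: a Contract Net manager announces a task, collects bids and awards it.
# Bid contents and the selection rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Bid:
    contractor: str
    quality_estimate: float   # contractor's own estimate of how well it can do the task

@dataclass
class Contractor:
    name: str
    capability: float
    def bid_for(self, task):
        # Decide whether to bid; here, bid whenever capability exceeds a threshold.
        return Bid(self.name, self.capability) if self.capability > 0.5 else None

def run_contract_net(task, contractors):
    # Announcement: broadcast the task specification to all contractors.
    bids = [c.bid_for(task) for c in contractors]
    # Bidding: keep only the agents that were capable and willing to bid.
    bids = [b for b in bids if b is not None]
    if not bids:
        return None
    # Awarding: pick a winner (here, simply the highest self-reported quality).
    winner = max(bids, key=lambda b: b.quality_estimate)
    # Expediting: the winner performs the task (possibly sub-contracting further).
    return winner.contractor

print(run_contract_net("track vehicle", [Contractor("P1", 0.9), Contractor("P2", 0.4)]))
```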


Contract Net via ACL performatives
cfp (call for proposals): Used for announcing a task;
propose: Used for making a proposal.
accept, refuse: Used to indicate acceptance or rejection of a proposal.
inform, failure: Used to indicate completion of a task (with the result) or failure to do so.


Contract Net domain
The collection of agents is the contract net.

Each agent can, at different times or for different tasks, be a manager or a contractor.

A hierarchy may be formed for decomposing a task, but there does not need to be a task-independent hierarchy in the net.

An agent could be delegated a subtask of a task it itself delegated, e.g. agent A decomposes task T into T1 and T2 and delegates them; B decomposes T1 into T1.1 and T1.2 and delegates T1.1 back to A


Contract Net example

Distributed sensing system. Detects, classifies and tracks vehicles, producing a dynamic map of the area.
Different types of sensor nodes (S). Processing nodes (P). Monitor node (M) initiates problem sharing and integrates solution.


Contract Net example
Monitor integrates information from individual area maps.

Area map captures details of vehicles in a particular area.

Individual vehicles can be described by signal groups; we may also want to describe a vehicle’s velocity, direction, etc.

Signals can be grouped together, for instance by frequency.

Signals monitored by sensors, e.g. audio, vibrations.

Data hierarchy: OVERALL AREA MAP → AREA MAP → VEHICLE → SIGNAL GROUP → SIGNAL


Contract Net example
Task hierarchy:
OVERALL AREA → AREA → GROUP and VEHICLE
GROUP → SIGNAL
VEHICLE → CLASSIFICATION, LOCALISATION, TRACKING


Contract Net example
Monitor Node: integrates area maps into the overall map
  Area Task Manager: oversees area contractors
Area Contractor: integrates vehicle traffic into an area map
  Group Task Manager: oversees group contractors
  Vehicle Task Manager: oversees vehicle contractors
Group Contractor: assembles signal features into groups
  Signal Task Manager: oversees signal contractors
Signal Contractor: provides signal features
Vehicle Contractor: integrates vehicle information
  Classification/Localisation/Tracking Task Managers: oversee the respective contractors
Classification Contractor: classifies vehicles
Localisation Contractor: locates vehicles
Tracking Contractor: tracks vehicles
Nodes are simultaneously workers and supervisors

Note: Classification and Signal Contractors can also communicate…


Contract Net example
Example signal task announcement

To: *
From: 25
Type: Task Announcement
Contract: 22–3–1
Eligibility Specification:
  Must-Have SENSOR
  Must-Have Position Area A
Task Abstraction:
  Task Type Signal
  Position LAT 47N LONG 17E
  Area Name A
  Specification (…)
Bid Specification:
  Position Lat Long
  Every Sensor Name Type
Expiration Time: 28 1730Z FEB 1979


Contract Net example
Example signal bid

To: 25
From: 42
Type: BID
Contract: 22–3–1
Node Abstraction:
LAT 47N LONG 17E
Sensor Name S1 Type S
Sensor Name S2 Type S
Sensor Name T1 Type T


Contract Net example
Example signal award

To: 42
From: 25
Type: AWARD
Contract: 22–3–1
Task Specification:
Sensor Name S1 Type S
Sensor Name S2 Type S


Contract Net applicability
Typical properties of suitable applications for the Contract Net protocol:
Distributed control and/or data
Levels of data abstraction
Careful selection of which agent should perform which task is important
Subtasks are of reasonable granularity and it’s worthwhile to expend effort to distribute them wisely


Coordination
Coordination is about managing interdependencies between agents’ activities.
Negative interdependencies
Insufficient resources to allow both activities simultaneously, e.g. both agents walking through a doorway at once
Activities require mutually exclusive states, e.g. to sit an exam you can’t be contactable, but if your partner is due to give birth you need to be
Positive interdependencies
An agent explicitly asks for help with activities
Two agents plan the same action, so only one of them needs to perform it
An agent plans an action with a side-effect of achieving or contributing to another’s goal, e.g. achieving preconditions of an action in the other’s plan


Coordination approaches
We want to avoid or deal with negative interdependencies.
We want to recognise and benefit from positive interdependencies.

Some approaches to coordination:
Multi-agent planning
Joint intentions (teamwork)


Multi-agent planning alternatives
Centralised planning for distributed agents
A centralised agent has all information (agents’ private goals, actions available) and uses this to determine an efficient plan for each agent.
Partial global planning
Agents plan for themselves, but also have partial information about the global plan of all agents.
Each agent shares information about its plan; a receiver merges this into its partial view of the global plan, tries to improve the global plan, and shares the improved version with others.
Merging of individual plans
Agents plan for themselves and share their plans with a centralised agent, which merges them to produce an efficient global plan.


Multi-agent planning issues
Multi-agent planning has some notable challenges
It requires agents to be willing to share private information, which may not be acceptable in some domains
The combination and improvement of plans is a non-trivial problem


Coordination through joint intentions
Joint intentions is a model of teamwork.
Just as we have individual intentions, we can have joint intentions for a team of agents.
Agents in a team have a collective intention towards the goal and have responsibilities to the other agents in the team.


Coordination through joint intentions
A group of agents have a collective commitment to bringing about some goal G; the motivation for this goal is M.

Initially every agent does not believe that G is satisfied, but believes that G is possible.

The termination condition is that it is mutually believed that either:
G is satisfied;
G is impossible;
M is no longer present.


Coordination through joint intentions
Every agent has the goal G until the termination condition is satisfied.

Until the termination condition is satisfied:
If any agent believes G is achieved, then it will have a goal that this becomes a mutual belief
If any agent believes that G is impossible, it will have a goal that this becomes a mutual belief
If any agent believes that the motivation M for the goal is no longer present then it will have the goal that this becomes a mutual belief
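A minimal sketch of one agent’s step in this commitment loop; the boolean belief flags and broadcast helper are illustrative stand-ins for the agent’s belief revision and communication machinery, not part of the formal joint-intentions model:

```python
# Sketch: one step of an agent's loop while it holds a joint commitment to goal G.
# Belief flags and broadcast() are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TeamAgent:
    name: str
    believes_achieved: bool = False
    believes_impossible: bool = False
    believes_motivated: bool = True
    inbox: list = field(default_factory=list)

def broadcast(team, message):
    for member in team:
        member.inbox.append(message)   # work towards mutual belief

def joint_commitment_step(agent, team):
    if agent.believes_achieved:
        broadcast(team, "G is satisfied")
    elif agent.believes_impossible:
        broadcast(team, "G is impossible")
    elif not agent.believes_motivated:
        broadcast(team, "motivation M no longer holds")
    else:
        pass  # otherwise, keep pursuing G

team = [TeamAgent("A"), TeamAgent("B", believes_achieved=True)]
joint_commitment_step(team[1], team)
print(team[0].inbox)  # ['G is satisfied']
```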


Expectations
To reason about how to act in a coordinated way with others, an agent needs expectations about what those others will do
Expectations can be set through communication such as in multi-agent planning and joint intentions
They can also be set based on knowledge of the system
An agent can expect behaviour because it is normal (conventional) or normative (a required standard)
An agent can expect qualities of behaviour based on trust (past experience of their own) or reputation (past experience of others)


Normative agent systems
A norm is an established, expected pattern of behaviour.
A norm is not a strict rule: it will not always be followed.

Can be prescriptive/enforced
In the UK, drive on the left
Do not plagiarise
The pope must be a male Catholic
Or not prescriptive/enforced (conventions)
Buy a present for a friend’s birthday
Eat a roast dinner in the UK on a Sunday
Tip good service in a restaurant


Formal representation of prescriptive norms
Target – who?
Deontic Component – Obligation, Prohibition, Permission
Context – when?
Content – what?
Punishment – what happens if you’re caught violating the norm?

< target, deontic, context, content, punishment >

< all-drivers, obligation, red light, stop, 3 points on licence >
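A minimal sketch of this tuple as a data structure; the Deontic enumeration and field values are illustrative choices, not a standard representation:

```python
# Sketch: the <target, deontic, context, content, punishment> norm tuple
# as a simple data structure. Field values are illustrative.
from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    OBLIGATION = "obligation"
    PROHIBITION = "prohibition"
    PERMISSION = "permission"

@dataclass(frozen=True)
class Norm:
    target: str        # who the norm applies to
    deontic: Deontic   # obligation, prohibition or permission
    context: str       # when the norm applies
    content: str       # what behaviour is obliged/prohibited/permitted
    punishment: str    # what happens if a violation is detected

red_light = Norm("all-drivers", Deontic.OBLIGATION, "red light", "stop",
                 "3 points on licence")
print(red_light)
```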


Exercise
How might we express the following as formal norms using the representation in the previous slide?
Choose appropriate labels and proposition names for the elements.
If anyone sees a fire in the building, they should sound the fire alarm.
No-one except trained firefighters is allowed in the building while the fire alarm is ringing.
When the alarm stops ringing, you may enter the building.
Failure to comply with the rules above risks a £1000 fine.


Reasoning with norms
Knowledge of norms affects agent reasoning about themselves.
If you’re caught violating an enforced norm, this will lead to a punishment (decrease in utility).
Violating a convention will not cause direct punishment, but may lead to lower utility by hindering coordination with others.
A norm may create an obligation goal.
In planning, will my plan violate any norms?

When reasoning about other agents, I can expect they are more likely to follow norms than not.
But because agents are autonomous and heterogeneous, they may choose to violate norms, which I may need to account for in my planning.


Example reasons for violating a norm
Why might a driver disobey the speed limit?


Generalised reasons for violating a norm
Why might an agent not comply with a prescriptive norm?
Ignorance
Lack of understanding
Punishment is too low
Probability of getting caught is too low
Another norm may be more important (salience)
Another goal may be more important
May disagree with purpose of norm
There may be no choice


Normative agent systems
Challenges for prescriptive normative agent systems.
Agents need to know the norms.
Agents need to understand the norms.
Monitoring needs to be effective.
Punishments need to be suitable.


Conventions as emergent behaviour
Conventions are norms that typically emerge over time as agents copy each other’s behaviour
Subsets of agents may produce different, incompatible conventions, e.g. different driving rules or different restaurant tipping expectations, so a convention may not cover the whole system
Influence maximisation, as discussed last lecture, aims to cause particular behaviours to become norms, meaning that other agents can then expect those behaviours


Trust and reputation
When using the Contract Net protocol, if an agent receives multiple bids for a task, which should it choose?
It maximises expected utility by choosing an agent with a high probability of producing high-quality results
It can determine whether an agent is reliable by learning from past experience: did they do a good job for me in the past? That is, do I trust them?
Or it can determine this by asking others about their past experience: did they do a good job for others in the past? That is, do they have a good reputation?


Example trust model: FIRE
FIRE [Huynh et al. 2006] is one model of trust and reputation that agents can use to decide between those they might delegate tasks to.
An evaluating agent can use four sources of information to judge another agent’s reliability:
Direct experience with the agent
Witness (other agents’) experience with the agent
Role-based rules where the agent performs roles that set expectations, e.g. being the target of prescriptive norms
References/testimonials from other agents, provided by the evaluated agent itself


FIRE Example
[Worked-example figures illustrating the FIRE model; not reproduced in this text version.]

Assessing trust in FIRE
An agent’s trust can be assessed separately for different service terms, e.g. timeliness, result quality
A past interaction from direct experience or a witness will have a rating value for each term
For each of the four sources, the evaluating agent calculates a weighted average score for the agent
The weight used for a past interaction is based on its likely relevance to judging the agent for a new interaction at the current time
The scores from all sources are combined to get an overall trust value


Calculating trust in FIRE
The formula for calculating each trust score is:
T_K(a, b, c) = Σ_{r ∈ R_K(a,b,c)} W_K(r) · v(r)  /  Σ_{r ∈ R_K(a,b,c)} W_K(r)
where T_K(a, b, c) is the trust of agent a in agent b for term c based on input source K, R_K(a, b, c) is the set of interaction records from source K, W_K(r) is the weight of record r in the calculation, and v(r) is the rating of the interaction.
Each of T_K(a, b, c), W_K(r) and v(r) is a value between 0 and 1, where 0 means no trust and 1 means absolute trust.
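A minimal sketch of this weighted-average calculation, using a recency-based weight of the kind described on the next slide; the halving-per-day decay (k = 2) and the (days_ago, rating) record format are illustrative assumptions:

```python
# Sketch: a recency-weighted average trust score, following the formula above.
# Records are (days_ago, rating) pairs; the halving-per-day decay (k = 2)
# is an illustrative choice of recency function.

def recency_weight(days_ago, k=2.0):
    """Older interactions count for less: weight = k^(-days_ago)."""
    return k ** (-days_ago)

def trust_score(records):
    """Weighted average of ratings v(r) with weights W(r)."""
    if not records:
        return None  # no information about this agent
    weighted = sum(recency_weight(d) * v for d, v in records)
    total_weight = sum(recency_weight(d) for d, v in records)
    return weighted / total_weight

# e.g. two past interactions: rated 0.8 yesterday and 0.4 ten days ago
print(round(trust_score([(1, 0.8), (10, 0.4)]), 3))
```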


Weighting by recency
Interactions can be weighted by recency, so that more recent interactions have higher weight than older ones.
This aims to capture the intuition that more recent interactions are more likely to reflect the agent’s current trustworthiness than older ones.
For example, the rating value from an interaction on day 20 may be weighted at 1/kth of the value from an interaction on day 21 (the day after).
Recency is a weighting used when assessing each individual agent; it does not give one agent a higher score simply because the evaluating agent interacted with it more recently than with other agents
Recency weighting is used in FIRE and many other trust assessment algorithms
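One way to formalise such a recency weighting (the symbols Δt(r) and k are introduced here purely for illustration; with k = 2, each extra day of age halves an interaction’s weight):

```latex
W(r) = k^{-\Delta t(r)}, \qquad \Delta t(r) = \text{days elapsed since interaction } r, \quad k > 1
```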


Exercise
An agent, Ag1, determines who to delegate a task to using a simplified FIRE trust model:
only interaction trust is used to compare agents
only a single quality-of-service term is used
uses a recency function such that a rating of an interaction on one day is given twice the weight of an interaction the day before
Two agents, Ag2 and Ag3, offer to perform the task.
100 days ago, Ag1 delegated to Ag2, with rating of 0.9
2 days ago, Ag1 delegated to Ag3, with rating of 0.9
1 day ago, Ag1 delegated to Ag3, with rating of 0.6
What are Ag1’s interaction trust scores for Ag2 and Ag3 and which will Ag1 delegate to now?


Reputation
Reputation assessment uses ratings from other agents (witnesses) instead of the evaluating agent’s own experience, but there are complications:
How much do you trust a witness to give an accurate rating?
A witness may have a different view on quality of service, so how do you translate their ratings to ones you can use?
How do you prevent agents colluding with the assessed agent from providing positive ratings that artificially boost its reputation score (whitewashing)?


Exploration
If an agent only relies on trust scores to decide which other agent to delegate to, they may end up always interacting with the same agent(s) because they do not build up any knowledge of alternatives
As with other forms of reinforcement learning, we can introduce an element of exploration, meaning that with some probability, the evaluator does not pick the agent with the top rating but another agent for which it has little information
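A minimal ε-greedy-style sketch of this idea; the value of ε and the “fewest past interactions” rule for picking an unfamiliar agent are illustrative choices:

```python
# Sketch: epsilon-greedy selection between trusted and little-known agents.
# epsilon and the "fewest past interactions" rule are illustrative choices.
import random

def choose_partner(trust, interaction_counts, epsilon=0.1):
    """trust: {agent: score}; interaction_counts: {agent: number of past interactions}."""
    if random.random() < epsilon:
        # Explore: try an agent we know little about.
        return min(interaction_counts, key=interaction_counts.get)
    # Exploit: pick the agent with the highest trust score.
    return max(trust, key=trust.get)

print(choose_partner({"B": 0.85, "C": 0.7}, {"B": 12, "C": 1}))
```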


Summary
Agents in a multi-agent system need a language to communicate in before they can cooperate or coordinate
Cooperation over a task means determining which agents can do which part of a task
Coordination avoids negative interdependencies and takes advantage of positive interdependencies to maximise social utility
Norms and past experience can set expectations that can be used to improve coordination
In the next few weeks, we look at coordination and agreement in domains where agents are more competitive than cooperative
