Lecture 1: Introduction

Introduction
Agents and Multi-Agent Systems

Basics
Lecturer: Dr. plus Prof. (1 class)
Time: Tuesdays 9:00–12:00, weekly except reading week
Classes: Each is a mix of lecture and tutorial
Materials: Slides, useful links, etc. all on KEATS (soon)
Assessment: 100% multiple choice exam in January
Questions: Use the KEATS discussion forum, not email

Tutorials
Tutorials are interleaved with lectures to ensure you test your understanding as you learn
Classes include open discussions, shown on “Discussion” slides, and worked exercises, shown on “Exercise” slides
Sample solutions to the worked exercises for a class will be uploaded to KEATS after the class as a downloadable sheet alongside the lecture slides
I will upload thoughts on the open discussions to the KEATS discussion forum after each class, for you to add and respond to if you wish
The above does not apply to Prof Luck’s class

Overall module aims
We will look at theories and methods that are used to:
Create intelligent agents embedded in environments that are capable of independent, autonomous action in order to successfully achieve the goals that we delegate to them, and
Allow a system of multiple agents to interact effectively and cooperate where necessary to achieve their individual goals, particularly when each agent is self-interested.

Content
Autonomous agents (Lectures 1 – 4)
What is an intelligent agent?
Defining how agents reason about their environment
Specifying how agents behave in their environment
Engineering autonomous agents
Multi-agent systems (Lectures 5 – 10)
Simulating social phenomena
Coordinating to productively work together
Finding mutual agreement on tasks and resources
Arguing about evidence to reach justified conclusions

Books
The book most relevant to the module is M. Wooldridge, An Introduction to MultiAgent Systems (2nd edition; see the link below).
Several copies are in the library, and there is free electronic access to the first edition.
It is not necessary to buy it: all the material you need will be covered in the classes.
Other relevant sources:
G. Weiss (ed.), Multiagent Systems
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach

http://www.cs.ox.ac.uk/people/michael.wooldridge/pubs/imas/IMAS2e.html

Department expectations of behaviour
Staff and students are expected to behave respectfully to one another – during lectures, outside of lectures and when communicating online or through email.
We won’t tolerate inappropriate or demeaning comments related to gender, gender identity and expression, sexual orientation, disability, physical appearance, race, religion, age, or any other personal characteristic.
If you witness or experience any behaviour you are concerned about, please speak to someone about it. This could be one of your lecturers, your personal tutor, a programme administrator, one of the diversity & inclusion co-chairs, a trained harassment advisor, or any member of staff you feel comfortable talking to.

What is an agent?
Agents are autonomous (decide for themselves what to do) and typically self-interested (act so as to try to achieve their goals).
We’re interested in agents that are intelligent (can work out the best way to act in a complex, unpredictable environment) and that interact with other agents.
Agents can be:
Software programs
Robots (including autonomous vehicles)
People or other organisms

Discussion

Why might these robots need some autonomy?
What does the Mars rover need to reason about to be able to autonomously travel to, collect, and analyse a sample?
How might their environments disrupt their plans?
Images: a Mars rover; a landmine clearance robot

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

Control over internal state and over its own behaviour.

Autonomous agents?
Thermostat
UNIX Daemon (e.g. biff)
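As a minimal illustration, here is a hypothetical Python sketch (not from the module materials) of a thermostat as a borderline agent: a single condition-action rule, with control over its own internal state.

# A thermostat as a minimal "agent": one condition-action rule.
# Hypothetical sketch; a real thermostat would also need hysteresis, fault handling, etc.
class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint   # internal state the system controls
        self.heating = False

    def step(self, room_temperature: float) -> None:
        # Heat if and only if the room is colder than the setpoint.
        self.heating = room_temperature < self.setpoint

t = Thermostat(setpoint=20.0)
t.step(18.5)
print(t.heating)   # True: the room is below the setpoint

Whether this counts as autonomous is exactly the point of the question: it controls its own behaviour, but only in the most trivial sense.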

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

Reactive: respond in timely fashion to environmental change.
Proactive: act in anticipation of future goals.

Reactivity
If a program’s environment is guaranteed to be fixed, the program need never worry about its own success or failure: it can simply execute blindly.
Example of a fixed environment: a compiler.
The real world is not like that: things change, information is incomplete. Many (most?) interesting environments are dynamic.
Software is hard to build for dynamic domains: the program must take into account the possibility of failure.
A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).
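To make this concrete, here is a hypothetical Python sketch of the ongoing stimulus-response interaction a reactive system maintains; the percepts and responses are invented for illustration.

import random

def sense() -> str:
    # Hypothetical environment that can change at any moment.
    return random.choice(["clear", "obstacle"])

def react(percept: str) -> str:
    # Timely stimulus -> response mapping, with no long deliberation.
    return "swerve" if percept == "obstacle" else "continue"

for _ in range(5):   # an ongoing interaction, not a one-shot computation
    percept = sense()
    print(percept, "->", react(percept))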

Proactiveness
Reacting to an environment can be easy (e.g., stimulus → response rules)
But we generally want agents to do things for us. Hence goal directed behaviour.
Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative.
In a dynamic environment, this is hard!

Balancing reactive and goal-oriented behaviour
We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion.
We want our agents to systematically work towards long-term goals.
These two considerations can be at odds with one another.
Designing an agent that can effectively balance the two remains an open research problem.
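One simple way to see the tension is a control loop that pursues a plan towards a long-term goal but reacts when an urgent percept arrives. This hypothetical Python sketch (the plan steps and replan rule are invented) is one naive compromise, not a solution to the research problem:

plan = ["leave_dock", "cross_hall", "enter_lab"]   # steps towards a long-term goal

def urgent(percept: str) -> bool:
    return percept == "obstacle"

def replan(remaining: list) -> list:
    # React first, then resume working towards the goal.
    return ["avoid_obstacle"] + remaining

for percept in ["clear", "obstacle", "clear", "clear"]:
    if urgent(percept):
        plan = replan(plan)                # reactive behaviour
    if plan:
        print("executing:", plan.pop(0))   # proactive behaviour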

Discussion
What are the goals of an autonomous vehicle in a convoy?
What might it have to react to at short notice?
How might we express a vehicle’s rules for following the convoy while retaining its autonomy to react to situations?

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account.
Some goals can only be achieved with the cooperation of others.
Similarly for many computer environments: witness the Internet.
Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others.

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

Experiences the environment through sensors and acts on it through effectors.

A situated agent is embedded in its environment, meaning that it affects and is affected by that environment, and its capability and intelligence are defined in relation to its environment.
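A hypothetical sketch of this sense-act coupling in Python (the see/act method names and the environment interface are illustrative, not a standard API):

from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def see(self, percept):   # sensor input from the environment
        ...

    @abstractmethod
    def act(self):            # choose an action for the effectors
        ...

def run(agent, environment, steps=10):
    # The environment closes the loop: the agent's actions change the
    # environment, which changes what the agent subsequently perceives.
    for _ in range(steps):
        agent.see(environment.current_percept())   # hypothetical environment API
        environment.apply(agent.act())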

What is an agent?
A computer system capable of flexible, autonomous (problem-solving) action, situated in dynamic, open, unpredictable and typically multi-agent domains.

Agents have control over their own state and behaviour (autonomous).
Agents balance reactive and proactive behaviour (flexible).
Agents have social abilities to interact with other agents (multi-agent).
Agents perceive and act on their environment (situated).

Agents & Objects
Are agents just objects by another name?
An object
encapsulates some state
communicates via message passing or invoking methods
has methods, corresponding to operations that may be performed on this state

Main differences:
agents are autonomous:
agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent
agents are smart:
capable of flexible (reactive, proactive, social) behaviour, whereas the standard object model has nothing to say about such types of behaviour
agents are active:
a continuous sense-decide-act loop, not passive service providers

Objects do it for free…
agents do it because they want to
agents do it for money
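A hypothetical Python caricature of the difference: invoking an object’s method executes it unconditionally, while an agent weighs a request against its own interests and may refuse.

class PrinterObject:
    def print_document(self, doc: str) -> str:
        return f"printed {doc}"          # no choice: invocation means execution

class PrinterAgent:
    def __init__(self):
        self.current_goal = "own maintenance"   # its own state and goals

    def request_print(self, doc: str, offered_payment: int) -> str:
        # The agent decides for itself whether to perform the action.
        if offered_payment >= 5:
            return f"agreed: printing {doc}"
        return f"declined: not worth interrupting {self.current_goal}"

print(PrinterObject().print_document("report"))    # always complies
print(PrinterAgent().request_print("report", 2))   # may refuse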

Discussion
Homes with electricity-generating infrastructure can feed power into the network to help meet demand
An up-front sale price can be fixed with a company for a year
Why might we instead want an autonomous software agent to sell the power in smaller batches on our behalf?
What strategies might an agent use to sell for the highest price possible?

Agents as Intentional Systems
When explaining human activity, it is often useful to make statements such as the following:
Janine took her umbrella because she believed it was going to rain.
Michael worked hard because he wanted to possess a PhD.

These statements make use of a folk psychology, by which human behaviour is predicted and explained through the attribution of attitudes, such as believing and wanting (as in the above examples), hoping, fearing, and so on.
The attitudes employed in such folk-psychological descriptions are called the intentional notions.

The philosopher Daniel Dennett coined the term intentional system to describe entities ‘whose behavior can be predicted by the method of attributing belief, desires and rational acumen’.

Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?

McCarthy argued that there are occasions when the intentional stance is appropriate:
‘To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them.’
J. McCarthy. Ascribing mental qualities to machines. Technical report, Stanford University AI Lab., 1978.

What objects can be described by the intentional stance?
As it turns out, more or less anything can… consider a light switch:
‘It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires’.

But most adults would find such a description absurd!
Why is this?

Y. Shoham. Agent oriented programming. Technical report, Stanford Computer Science Dept., 1990.

The answer seems to be that while the intentional stance description is consistent,
‘…it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behavior.’ [Shoham, 1990]

Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour
But with very complex systems, a mechanistic explanation of its behaviour may not be practicable

As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation — low level explanations become impractical. The intentional stance is such an abstraction

Dennett identifies different ‘grades’ of intentional system:

‘A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. …A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) — both those of others and its own’

This intentional stance is an abstraction tool — a convenient way of talking about complex systems, which allows us to predict and explain their behaviour without having to understand how the mechanism actually works

Now, much of computer science is concerned with looking for abstraction mechanisms (witness procedural abstraction, ADTs, objects,…)
So why not use the intentional stance as an abstraction tool in computing — to explain, understand, and, crucially, program computer systems?

Characterising agents as intentional systems provides us with a familiar, non-technical way of understanding & explaining agents
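To make the idea concrete, here is a hypothetical sketch of the intentional stance as a programming abstraction: beliefs and desires become explicit data that drive action selection. (The practical reasoning class later in the module develops this idea properly; this fragment is only illustrative.)

agent = {
    "beliefs": {"raining": True},
    "desires": ["stay_dry", "get_to_work"],
}

def choose_action(agent) -> str:
    # 'Janine took her umbrella because she believed it was going to rain.'
    if "stay_dry" in agent["desires"] and agent["beliefs"].get("raining"):
        return "take_umbrella"
    return "leave_without_umbrella"

print(choose_action(agent))   # take_umbrella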

How did we get to agents?
Ongoing trends in computing:
Ubiquity
Interconnection
Intelligence
Delegation
Human-orientation

Ubiquity
The continual reduction in cost of computing capability has made it possible to introduce processing power into places and devices that would have once been uneconomic
Examples: the Internet of Things, smart cities.

Interconnection
Computer systems today no longer stand alone, but are networked into large distributed systems.
Since distributed and concurrent systems have become the norm, it is natural to think of theoretical models that portray computing as primarily a process of interaction.

Intelligence
The complexity of tasks that we are capable of automating and delegating to computers has grown steadily.

Example: RoboCup 2019 robot football, https://youtu.be/_PC-V5GJP6Q

Delegation
Computers are doing more for us – without our intervention.
We are giving control to computers, even in safety critical tasks.

Example: in 2016, a robot carried out fully autonomous surgery on a live pig.

Human Orientation
The movement away from machine-oriented views of programming toward concepts and metaphors that more closely reflect the way we ourselves understand the world.
Programmers and users relate to the machine differently.
Programmers conceptualize and implement software in terms of higher-level – more human-oriented – abstractions.

Human Orientation
The Electronic Numerical Integrator And Computer (ENIAC), built in 1943–45: http://www.columbia.edu/cu/computinghistory/eniac.html

Human Orientation
Programming has progressed through:
machine code;
assembly language;
machine-independent programming languages;
sub-routines;
procedures & functions;
abstract data types;
objects;
to agents.

Higher levels of abstraction
Using agent-based methods “can improve overall developer productivity by an average 350%. For Java coding alone, the increase in productivity was over 500%.” (S. Benfield, Making a Strong Business Case for Multiagent Technology, Invited Talk at AAMAS 2006)

Where do these trends bring us?
Delegation and Intelligence imply the need to build computer systems that can act effectively on our behalf. This implies:
The ability of computer systems to act independently.
The ability of computer systems to act in a way that represents our best interests while interacting with other humans or systems.
Interconnection and Distribution, coupled with the need for systems to represent our best interests, imply systems that can cooperate and reach agreements with other systems that have different interests (much as we do with other people).
All of these trends have led to the emergence of a relatively new field in Computer Science: multi-agent systems.

Applications of agent technology
“A key air-traffic control system…suddenly fails, leaving flights in the vicinity of the airport with no air-traffic control support. Fortunately, autonomous air-traffic control systems in nearby airports recognize the failure of their peer, and cooperate to track and deal with all affected flights.”
Systems taking the initiative when necessary.
Agents cooperate to solve problems beyond the capabilities of any individual agent.

Applications of agent technology
You are editing a file, when your Personal Digital Assistant (PDA) requests your attention: an email message has arrived, that contains notification about a paper you sent to an important conference, and the PDA correctly predicted that you would want to see it as soon as possible. The paper has been accepted, and without prompting, the PDA begins to look into travel arrangements. A short time later, you are presented with a summary of the cheapest and most convenient travel options.
How do you state your preferences to your agent?
How can your agent compare different deals from different vendors? What if there are many different parameters?
What algorithms can your agent use to negotiate with other agents (to make sure you get a good deal)?


Agents and multi-agent systems
An agent is a computer system that is capable of independent action on behalf of its user or owner (figuring out what needs to be done to satisfy design objectives, rather than constantly being told).
A multi-agent system is one that consists of a number of agents, which interact with one another.
In the most general case, agents will be acting on behalf of users with different goals and motivations.
To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other, much as people do.

Agents and multi-agent systems
Two key problems:
How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them?
How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals?

Multi-agent systems
In multi-agent systems, we address questions such as:
How can cooperation emerge in societies of self-interested agents?
How can agents communicate with one another?
How can self-interested agents recognise conflict, and how can they (nevertheless) reach agreement?
How can autonomous agents coordinate their activities so as to cooperatively achieve goals?
While these questions are all addressed in part by other disciplines (notably economics and social sciences), what makes the multi-agent systems field unique is that it emphasises that the agents in question are computational, information processing entities.

Influencing disciplines
The field of multi-agent systems is influenced and inspired by many other fields:
Economics
Philosophy
Game Theory
Logic
Ecology
Social Sciences

This can be both a strength (infusing well-founded methodologies into the field) and a weakness (there are many different views as to what the field is about)

Discussion
How could a drone help people in difficulty at sea?
Why might the drone benefit from some autonomy?
Why might multiple drones be better than one?
What coordination is needed between the drones?
How could drones reason and communicate to achieve this coordination?

BBC News: UK coastguard plans drone rescue trial in south-west England (5 August 2019)

Two views of the field
Agents as a paradigm for software engineering:
Software engineers have developed a progressively better understanding of the characteristics of complexity in software. It is now widely recognized that interaction is probably the most important single characteristic of complex software.

Two views of the field
Agents as a tool for understanding human societies:
Multi-agent systems provide a tool for simulating societies, which may help shed some light on various kinds of social processes.

For representing complex and dynamic real world environments.
e.g. simulation of economies, societies and biological environments.

To provide answers to complex physical or social problems.
e.g. modelling of the impact of climate change on biological populations, modelling impact of public policy options on social or economic behaviour.

Computation and people in multi-agent systems
Another way to look at this is to distinguish what role the multi-agent system plays in relation to people and computers
The multi-agent system could be the computational system achieving some goals for people (users)
The multi-agent computer system could be a simulation of a multi-agent human system to help investigate that human system
The multi-agent system could be a group of interacting people, supported by a computational mechanism by which they reach agreement

Objections to multi-agent systems?
Isn’t it all just Distributed/Concurrent Systems?

There is much to learn from this community, but:
Agents are assumed to be autonomous, capable of making independent decisions – so they need mechanisms to synchronize and coordinate their activities at run time.
Agents are (can be) self-interested, so their interactions are “economic” encounters.

Objections to multi-agent systems?
Isn’t it all just AI?

We don’t need to solve all the problems of artificial intelligence (i.e., all the components of intelligence) in order to build really useful agents.
Classical AI ignored social aspects of agency. These are important parts of intelligent activity in real-world settings.

Objections to multi-agent systems?
Isn’t it all just Economics/Game Theory?

These fields also have a lot to teach us in multiagent systems, but:
Insofar as game theory provides descriptive concepts, it doesn’t always tell us how to compute solutions; we’re concerned with computational, resource-bounded agents.
Some assumptions in economics/game theory (such as a rational agent) may not be valid or useful in building artificial agents.

Objections to multi-agent systems?
Isn’t it all just Social Science?

We can draw insights from the study of human societies, but there is no particular reason to believe that artificial societies will be constructed in the same way.
Again, we have inspiration and cross-fertilization, but hardly subsumption.

Discussion
There is demand for public electric vehicle charging points to be installed, but each installation is costly.
People prefer charging points to be nearer their own homes, workplaces, etc.
How might a city distribute an affordable amount of charging points that best meets residents’ demands?

Module overview

Practical reasoning
Class next week taught by Prof Luck
If an agent intends to do something, how does it then work out how to do it?
If an agent comes up with a plan to achieve its intentions, how does it react to changes in the world that may disrupt that plan?
For us to know that an agent will perform its function correctly, it may need to commit to achieving some intention; but what if it cannot meet its commitment?
What is the internal architecture of a software or hardware agent able to reason about its intentions and how to fulfil them?

Embedded agents
An agent is embedded in an environment, whether physical or virtual, and acts in that environment
What are the characteristics of these environments, and how do these affect what an agent can achieve?
Deterministic vs non-deterministic, dynamic vs static, discrete vs continuous…
How do we formally describe the behaviour of the agent in its interaction with its environment?
How does an agent determine what success looks like so it can autonomously decide how to behave?

Agent architectures
Going further into what is required in the design of an agent to make it useful and intelligent.

How can we retain an agent’s autonomy when designing rules for its behaviour?
Can we take inspiration from nature to design agents that are more robust and easier to create?

Agent-based modelling
We look at how multi-agent systems can be used to simulate societies and other complex phenomena in order to understand and explore them
What is simulation and what is its value?
What is distinctive about an agent-based model as a simulation?
We look at how to code simple agent-based models using the NetLogo platform
How does an agent-based model allow us to test an intervention (potential change) in simulation?
What is the methodology for creating a robust ABM?
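The module uses NetLogo for agent-based models; purely to illustrate the idea here, this hypothetical Python sketch shows a minimal ABM in which a simple individual rule (copy a random peer’s opinion) produces a measurable aggregate outcome.

import random

random.seed(1)
opinions = [random.randint(0, 1) for _ in range(100)]   # 100 agents, opinion 0 or 1

for tick in range(500):
    i, j = random.sample(range(len(opinions)), 2)
    opinions[i] = opinions[j]   # local interaction rule: imitate a random peer

print("fraction holding opinion 1:", sum(opinions) / len(opinions))

Running the model many times, or with an intervention (e.g. fixing a few agents’ opinions), is exactly the kind of experiment an ABM supports.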

Coordination
A multi-agent system allows for potential benefit through agents pooling their abilities and resources, but this only works if they coordinate
How can an agent with a problem to solve distribute this among a set of agents for improved performance?
By what protocol could any agent delegate to any other agents flexibly across a large multi-agent system?
If agents each have partial information about their parts of the environment, how could they coordinate to achieve tasks requiring global information?
Can making agents predictable to other agents in the normal case aid cooperation with less communication?
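One classic answer to the delegation question is the Contract Net protocol: a manager announces a task, contractors bid, and the task is awarded to the best bidder. A hypothetical sketch (the agents and their cost estimates are invented):

def contract_net(task: str, contractors: dict) -> str:
    # Announce the task, collect bids (estimated costs), award to the cheapest.
    bids = {name: estimate(task) for name, estimate in contractors.items()}
    winner = min(bids, key=bids.get)
    return f"{task} awarded to {winner} (bid {bids[winner]})"

contractors = {
    "agent_a": lambda task: 10,   # each contractor's private cost estimate
    "agent_b": lambda task: 7,
    "agent_c": lambda task: 12,
}
print(contract_net("analyse_sample", contractors))   # awarded to agent_b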

Negotiation
Agents do not have to be cooperative; they may be competitive and have conflicting goals. We next consider how they may come to mutually acceptable agreements through negotiation.
If all agents behave individually rationally in negotiating, do we necessarily get the best outcome?
We need agreement mechanisms through which agents coordinate: what do we want from such a mechanism?
What strategies can agents use to negotiate and what are the ultimate outcomes for the system?
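As a taste of what such strategies look like, here is a hypothetical sketch of alternating-offers negotiation over splitting 100 units of a resource, where each agent concedes a fixed amount per round; the initial demands and concession rate are invented.

def negotiate(rounds: int = 10, concession: int = 10) -> str:
    demand = {"buyer": 80, "seller": 80}   # each initially wants 80 of the 100 units
    for r in range(rounds):
        proposer = "buyer" if r % 2 == 0 else "seller"
        responder = "seller" if proposer == "buyer" else "buyer"
        offer = 100 - demand[proposer]     # what the proposer leaves the responder
        if offer >= demand[responder]:
            return f"round {r}: deal, {responder} gets {offer}"
        demand[proposer] -= concession     # concede and continue
    return "no deal before the deadline"

print(negotiate())   # the fixed-concession strategies meet in the middle

Whether conceding like this is individually rational, and what outcome it yields for the system, is exactly what we analyse.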

Social choice
A centralised mechanism, such as a vote, could be used to get agents to reach agreement on a group decision, but there are many ways this could work.
In what ways can we reconcile the varying preferences of agents in a group decision?
How can agents’ preferences or votes be combined to come to a consensus?
What properties are desirable for a group decision mechanism and are they achievable?
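As a hypothetical illustration of why the mechanism matters, here are two common voting rules, plurality and Borda count, applied to the same invented ballots; they elect different winners.

from collections import Counter

# Each ballot ranks the candidates, best first.
ballots = 3 * [["a", "b", "c"]] + 2 * [["b", "c", "a"]] + 2 * [["c", "b", "a"]]

plurality = Counter(ballot[0] for ballot in ballots)   # first choices only
borda = Counter()
for ballot in ballots:
    for points, candidate in enumerate(reversed(ballot)):   # points 2, 1, 0
        borda[candidate] += points

print("plurality winner:", plurality.most_common(1)[0][0])   # a
print("borda winner:", borda.most_common(1)[0][0])           # b: a different winner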

Auctions
A particular kind of agreement between agents is the allocation of resources, and an auction is a general mechanism by which this can be done.
What forms of auction are possible?
What potential benefits do different kinds of auction have for the participants and the auctioneer?
How can we auction combinations of multiple connected resources?
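A hypothetical sketch contrasting two sealed-bid forms: in a first-price auction the winner pays its own bid, while in a second-price (Vickrey) auction the winner pays the second-highest bid, which makes truthful bidding a dominant strategy. The bids are invented.

bids = {"a1": 40, "a2": 55, "a3": 50, "a4": 30}   # sealed bids for one resource

ranked = sorted(bids, key=bids.get, reverse=True)
winner, runner_up = ranked[0], ranked[1]

print("first-price:", winner, "pays", bids[winner])       # a2 pays 55 (its own bid)
print("second-price:", winner, "pays", bids[runner_up])   # a2 pays 50 (Vickrey)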

Argumentation
Multi-agent reasoning can expose the rationality and consequences of a position under discussion when a complex set of interrelated facts is under consideration.
How can an argument between agents be represented within a computer system?
How can a consistent set of facts be determined from the arguments presented?
How can agents reason about knowledge that is part of an open-ended discussion and may be refuted at some later point?
How can argumentation be used to help agents reach agreement?
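A hypothetical sketch of one standard representation, an abstract (Dung-style) argumentation framework: arguments attack one another, and the grounded extension collects the arguments that survive, starting from the unattacked ones.

arguments = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "C")}   # A attacks B; B attacks C

def grounded_extension(arguments, attacks):
    accepted = set()
    changed = True
    while changed:
        changed = False
        for arg in arguments - accepted:
            attackers = {x for (x, y) in attacks if y == arg}
            # Accept arg if every attacker is itself attacked by an accepted argument.
            if all(any((z, x) in attacks for z in accepted) for x in attackers):
                accepted.add(arg)
                changed = True
    return accepted

print(grounded_extension(arguments, attacks))   # {'A', 'C'}: B is defeated by A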

Any questions?
See you in two weeks!
