
SWEN90010 – High Integrity
Systems Engineering
Alloy Example and Trace-Based Modelling (Capability-Based Access Control)


DMD 8.17 (Level 8, Doug McDonell Bldg)
http://people.eng.unimelb.edu.au/tobym @tobycmurray

ACCESS CONTROL

Copyright University of Melbourne 2016, provided under Creative Commons Attribution License
Access Control
who can access what in which ways
the “who” are called subjects
  e.g. users, processes etc.
the “what” are called objects
  e.g. individual files, sockets, processes etc.
  objects include all subjects
the “ways” are called permissions
  e.g. read, write, execute etc.
  permissions are usually specific to each kind of object
  permissions include those meta-permissions that allow modification of the protection state

AC Mechanisms and Policies
AC Policy: specifies allowed accesses, and how these can change over time
AC Mechanism: implements the policy
Certain mechanisms lend themselves to certain kinds of policies
Certain policies cannot be expressed using certain mechanisms

Protection State
Access control matrix defines the protection state at any instant in time
Storing Protection State
Not usually stored as an access control matrix: too sparse, inefficient
Two obvious choices:
store individual columns with each object
  each such column is called the object’s access control list
  defines the subjects that can access each object
store individual rows with each subject
  each such row is called the subject’s capability list
  defines the objects each subject can access
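
To make the two choices concrete, here is a small Python sketch (my own, not from the lecture; the subjects, objects and permissions are invented) storing the same protection state as per-object ACLs and deriving the per-subject capability lists from it:

# Protection state stored column-wise: object -> {subject: permissions} (ACLs)
acls = {
    "file1": {"alice": {"read", "write"}, "bob": {"read"}},
    "sock1": {"bob": {"read", "write"}},
}

# The same state stored row-wise: subject -> {object: permissions} (capability lists)
caps = {}
for obj, entries in acls.items():
    for subj, perms in entries.items():
        caps.setdefault(subj, {})[obj] = perms

def allowed(subj, obj, perm):
    # Check a single access against the ACL representation
    return perm in acls.get(obj, {}).get(subj, set())

print(allowed("bob", "file1", "write"))  # False
print(caps["bob"])                       # bob's capability list (his row of the matrix)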

Access Control Lists (ACLs)
Subjects usually aggregated into classes, e.g. UNIX: owner, group, everyone
Meta-permissions (e.g. own) control class membership and allow modifying the ACL
Implemented in almost all commercial OSes

Capabilities
A capability is a capability list element
Names an object to which the capability refers
Confers permissions over that object
Less common in commercial systems, though more common in research

Capabilities: Implementations
Capabilities must be unforgeable
On conventional hardware, either:
stored as ordinary user-level data, but unguessable due to sparseness, like a password or an encryption key
  sparse capabilities can be leaked more easily, but are easier to revoke
  the only solution for most distributed systems
stored separately (in-kernel), referred to by user programs by index, like UNIX file descriptors

Mandatory vs. Discretionary AC
Discretionary Access Control (DAC):
  users can make access control decisions, delegate their access to other users, etc.
Mandatory Access Control (MAC): enforcement of administrator-defined policy
  users cannot make access control decisions (except those allowed by the mandatory policy)
  can prevent untrusted applications running with the user’s privileges from causing damage

MAC is common in areas with global security requirements, e.g. national security classifications
Less useful for general-purpose settings:
  hard to support different kinds of policies
  all policy changes must go through a sysadmin
  hard to dynamically delegate only the specific rights required at runtime

Bell-LaPadula (BLP) Model
MAC Policy/Mechanism
Formalises national security classifications
Every object is assigned a classification, e.g. TS (Top Secret), S (Secret), C (Confidential), U (Unclassified)
Classifications are ordered in a lattice, e.g. TS > S > C > U
Every subject is assigned a clearance:
  the highest classification they’re allowed to learn

BLP: Rules
Simple Security Property (“no read up”):
  s can read o iff clearance(s) >= class(o)
  an S-cleared subject can read U, C, S but not TS
  standard confidentiality
*-Property (“no write down”):
  s can write o iff clearance(s) <= class(o)
  an S-cleared subject can write TS, S but not C, U
  prevents accidental or malicious leakage of data to lower levels

Boebert’s Attack
Boebert 1984: “On the Inability of an Unmodified Capability Machine to Enforce the *-Property”
Shows an attack on sparse capability systems that violates the *-property
  where caps and data are indistinguishable
Does not work against partitioned capability systems
  practically all capability-based kernels

LET’S MODEL THIS IN ALLOY

Initial Conditions for Attack
[Diagram: the Low actor holds a read-write capability rw_l to the low segment LoSeg]

What we need to model
Data
Capabilities: each points to an object and carries certain access rights (permissions)
Objects: things capabilities point to; each has a classification (High or Low)
Actors (i.e. programs): can use capabilities that they possess
Memory Segments: which store Data

Boebert’s Attack
[Diagram: Low executes rw_l.write(rw_l), writing his RW capability into LoSeg; High, holding a read capability r_l to LoSeg, executes r_l.read() and obtains rw_l]
Low writes his cap into the low segment, from which High reads it out

Boebert’s Attack: Lessons
Not all mechanisms are suited to all policies
Many policies treat data- and access-propagation differently
  BLP is one example
  cannot be expressed using sparse capability systems
This does not mean that capability systems and MAC are incompatible in general

TRACE-BASED MODELLING IN DETAIL

Trace-Based Modelling
Can’t just consider individual state transitions: we need to talk about a sequence (aka a trace) of such transitions.

Trace (Sequence of States)
s1 s2 s3 s4 ...
There is a state transition between each pair of adjacent states in the sequence
How do we define this in Alloy?

Traces in Alloy
Alloy 6 natively reasons about traces
The models (behaviours) that Alloy considers (e.g. when doing check and run) are lasso traces

Defining Traces
We need to give these traces meaning
Constrain the state transitions:
  fact trans { always all s: State | op1[s] or op2[s] or ... or unchanged[s] }
(Often, also) constrain the first state:
  fact init { all s: State | init[s] }


Philosophy & Ethics
Module 3: Utilitarianism and Deontology


This material has been reproduced and communicated to you by or on behalf of the University of Melbourne in accordance with section 113P of the Copyright Act 1968 (Act).
The material in this communication may be subject to copyright under the Act.
Any further reproduction or communication of this material by you may be the subject of copyright protection under the Act.
Do not remove this notice

Learning outcomes
At the end of this module, you should be able to:
• Explain what ethics is and understand some basic features of ethical thinking
• Describe the ethical theories of utilitarianism and deontology
• Begin to apply these ethical theories to a case study involving AI

Power of AI
• Potential for great good
• Potential for great harm
Digital ethics team

What are the ethics of using AI decision-making to:
• Determine if someone goes to jail?
• Help in allocation of police?
• Write your essays for you?
• Make medical diagnoses?
• Exceed human intelligence?

Ethics and religion
• Buddhism, Monotheism (Christianity, Islam, Judaism), Confucianism, Hinduism, Jainism, Shintoism, African, Dreamtime etc.
• Source of moral authority
• Christians: loving God and each other
• Buddhists: universal compassion, nirvana
• Confucianism: respect for parents and ancestors

What is ethics?
• Socrates: How should one live?
• Wittgenstein: “Ethics is the enquiry into what is valuable, or into what is really important, or into the meaning of life, or into what makes life worth living, or into the right way of living.”
Ethics about values, morals, good and bad, right and wrong
1. How should I act? What standards of behaviour should I adopt?
2. What sort of person should I be?
3. What sort of professional computer scientist should I be?
Normative versus descriptive

Ancient and applied ethics – west
Modern applied ethics – e.g.:
• Is it acceptable to end the lives of suffering people?
• When is it OK or not OK to go to war?
• Is it ever justified for a doctor to lie to a patient?
• Can we favour our family over strangers in desperate need?
• Is it right to eat nonhuman animals?
• Do we have obligations to unborn generations?
Ancient Greece: Ethics as rational activity

Self and others
• Nihilism: no standards
• But: self-centred
• Egoism: ‘I ought to do only what does the most good for me’
• Ethics is about respecting or caring for others as well as self
• Ethics and human life

Relativism
• No universal right and wrong
• Cultural relativism: right/wrong just means what a culture believes is right/wrong
• Subjectivism – individual
• E.g. human rights
• Problem: Does right/wrong mean those things?
• People can disagree with other people, cultures
• Give reasons: strong or weak (not just ‘gut feelings’)
• So: relativist doesn’t prove no universal rights and wrongs

Moral machine

• Old person vs. young person
• Man vs. pregnant woman
• Famous cancer scientist vs. homeless person
• Intelligent animal vs. serial killer

Trolley problem: What would you do and why?

While ethics is a rational process and potentially universal:
• Ethical answers not always clear cut
• Disagree with each other
• Respect other perspectives and diverse insights
• Socrates: We can test our ethical beliefs through open-minded reflection and dialogue
• e.g. in tutorials!

Why should I be ethical?
• Society expects it
• Successful, co-operative team player
• To gain respect
• Inner peace, avoid guilt
• Just because it is right
• Socrates: “Unexamined life is not worth living for human beings”
• “Better to suffer evil than to do it”

Principles in AI ethics
Fairness, Safety, Accountability, Transparency, Benefit, Explainability, Privacy

Ethical theory
Utilitarianism – consequence based
Deontology – rule based
Virtue ethics – character based
Ethics of care – relationship based

Utilitarianism
Religious ideas dominated pre-Enlightenment: Christianity had strict prohibitions and looked to happiness in the next world
Utilitarianism was revolutionary:
Against abstract rules “written in the heavens”
Progressive – social change, e.g. more liberty
Not opposed to pleasure
Partial return to Greek philosophers (Socrates, Aristotle, Plato) – reason

Jeremy Bentham (1748–1832)

Utilitarianism
Consequentialism: consequences alone determine action’s rightness
What consequences?
Greatest-Happiness Principle: “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” (John Stuart Mill, 1806–1873)
“Greatest happiness of the greatest number of people”
Principle of utility: Right = maximise total net happiness/wellbeing

John Stuart Mill (1806–1873)

• Teleological theory: ends (good) determines the right
• Utility: value to individuals
• Interests: Harms and benefits
• Psychological, physical, emotional, social, economic, spiritual
• Benefits (goods) = positive utility
• Harms (bads) = negative utility
• Maximise utility = increase benefits, decrease harms

Hedonism: Bentham vs Mill
Utility: what is intrinsically good and bad for us (not just instrumentally good, e.g. money, exercise)
Hedonism: “two masters”: pleasure & pain
Bentham: Pain/suffering is bad
  Intensity, duration
  All pleasures are equal
  “Pushpin as good as poetry”
Mill: higher and lower pleasures
  Poetry better than pushpin
  Socrates unsatisfied > satisfied fool

Preference utilitarianism

Intrinsic good = satisfied preferences
Intrinsic bad = thwarted preferences
Happiness = overall balance
Preference calculus

Best overall state of affairs
Net (not gross) pleasure/preference satisfaction
All pleasure/pain matters equally
In that sense: all individuals are equal (including you)
Consider ALL the consequences:
Good and bad
Near and far
Probability
Calculate best outcome
Hedonic calculus/preference calculus

Surely consequences matter
Surely happiness/wellbeing matter
If they matter, isn’t more happiness better than less?
Simple, clear decision procedure: Principle of Utility
Rational (cf. accepting authority)
Equality: all count for 1: no class, race, gender, intelligence, etc. favouritism

Trolley problem – utilitarianism

Deontology
Non-teleological (not about ends)
Deontic = duty, obligation
Rule-based; non-consequentialist
We learn rules and principles from childhood
These capture best what morality is about
Can refine and alter rules via reflection and argument
Keep promises
Don’t steal
Be honest, don’t deceive
Be just and fair
Repay kindnesses
Avoid doing harm
Look after your friends and family
Don’t deliberately harm/kill the innocent

Trolley problem – deontology

D attacks U
For some deontologists: Consequences can matter – e.g. generosity requires calculating benefit. But – more to ethics than calculating consequences!
Maximising ethic too demanding – give up much of own wellbeing for strangers. Singer – give up high percentage of income.
Evil-doing – pushing the large man
Not as helpful a decision-making procedure as utilitarians think – a difficult-to-impossible calculation
Fairness – although each person’s similar interests count equally, maximizing wellbeing can cause apparent injustice

Angry mob example

Murder in town
Townsfolk want justice
They have captured a suspect – a homeless loner
As sheriff, you know the suspect is innocent
But if the man is not hanged – rampage!
What would a U do?
What would a D do?

Prima facie vs. absolute rules
Prima facie rules: ‘on the face of it’
• Rule applies presumptively
• Rules can conflict: need judgement to resolve (e.g. break promise to save a life)
• Some rules win out, others are overridden
Absolute rules: unconditional
• Don’t have exceptions
• Don’t yield to other rules
• Greatest protagonist: German philosopher Immanuel Kant (1724–1804)

Kant’s ethics
A special kind of deontology
Absolute duties
Despised “the serpent-windings” of U
Aiming to produce good consequences ≠ right
Right = a “good will”; acting for right reasons; acting for duty’s sake
Morality is rational
But rationality is opposed to consequentialism

Immanuel Kant (1724–1804)

Categorical imperative – first
• Actions must be universalisable
• We act on rules in ethics
• But moral rules don’t just apply to you
• Can’t make exceptions for ourselves
• “Act only according to that maxim (rule) whereby you can at the same time will that it should become a universal law”
• E.g.: Rule: it’s OK for me to lie
• This means: it’s OK for anyone to lie
• But if everyone lies when it suits them: truth collapses > lying becomes impossible
• Hence: lying is irrational
• Same for promise-breaking

Categorical imperative – second
Second part of Moral Law
Connected to the first
All rational beings can grasp the moral law
They have autonomy
“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end”

Ends and merely means
• All autonomous beings are ends-in-themselves
• Sublime and equal dignity (infinite worth)
• Never treat them as merely means
• Hire a plumber – using them as a means (fine)
• But treating people as mere means and not ends-in-themselves – e.g., deceiving, breaking promises, manipulating – is never allowed
• E.g. murderer asking if you have hidden
his intended victim – may you lie?
• Autonomy must be respected

Modern notion of autonomy
Autonomy is the ability to think for ourselves, plan our lives, act on our values
To respect autonomy, need people’s informed consent (e.g. collecting private information about them)
We should aim to:
• Respect the autonomy of others
• Try to understand people’s values and beliefs and get their consent
• Respect control over personal information
• Remember both powerful and weak have autonomy
• Be honest with people, including when things go wrong (deceiving people disrespects their autonomy)
Digital ethics team

U and rules
• U: consequences matter more than rules
• But: rules matter if they affect consequences!
• E.g. Social rule against punishing innocent – good U rule?
• Some rules, laws, basic rights are important (e.g. don’t kill, don’t torture etc.)
• But: must be changed if not best consequences!
• U: ‘Morality made for people, not people for morality’

AI headbands
Morally justified or not?

Wall Street Journal video (https://www.wsj.com/articles/chinas-efforts-to-lead-the-way-in-ai-start-in-its-classrooms-11571958181)

AI headbands example
• Study in selected classrooms to collect data, train AI, and improve the headbands
• Uses electroencephalography (EEG) sensors to measure brain signals and AI algorithm to translate signals into real-time focus levels
• Colours displayed on band
• Also designed to help students focus through
neurofeedback
• Results of classrooms with and without compared
• Data from students kept on company server
• Participation was compulsory, and students and parents were not told the details of the study
• What might U and Kant say about this?

Summary
Nature of ethics
Religion
Egoism, relativism
Moral reason
Utilitarianism
Deontology
…and how to apply these to AI


COMP90087: The Ethics of AI – 2022 Semester 1

Key dates (https://www.unimelb.edu.au/dates):
Teaching weeks commence 28 Feb, 7 March, 14 March, 21 March, 28 March, 4 April, 11 April, 18 April, 25 April, 2 May, 9 May, 16 May and 23 May (the Easter non-teaching week falls in April)
30 May – 3 June
6 June – 24 June: Exam period

Topics (lectures in -200 Rivett theatre and Zoom):
• Trust, power, & machines
• History of AI
• Theories – utilitarianism & deontology
• Theories – virtue ethics & care ethics
• Fairness, equity & accountability
• Explainability
• Transparency
• Algorithmic bias & accessibility
• Data governance
• AI and care
• Law, frameworks & human rights
• Superintelligence & bringing it together

Assessment
• Weekly tutorial contribution: throughout semester (20% of mark; 2% per tute). Students may miss 1 tutorial without penalty.
• First essay (30% of mark): due Wed midday, April 13 (35–45 hours work)
• Research essay (30% of mark): due Tuesday midday (12pm), June 7 (35–45 hours work)
• 1-hour online quiz with 15 min reading time (20% of mark): Wed June 15

Table of events in COMP90087: The Ethics of AI for 2022 S1



Photo: https://www.createdigital.org.au/human-like-robot-aged-care-homes/
Week 10/S1/2022
AI in Care


School of Computing and Information Systems The University of Melbourne
jwaycott [at] unimelb.edu.au

Learning Outcomes
1. Describe the role of AI in supporting different kinds of care, including in sensitive and complex care settings (e.g., aged care).
2. Apply care ethics frameworks (e.g., Tronto) to define and analyse care and caregiving.
3. Critique the design and use of AI for care, and discuss the possible unintended consequences of using AI in care settings.
4. Apply concepts from care ethics and value-sensitive design to discuss the future appropriate design of AI for care.

Related Reading
This module has two readings about ethical issues in the design and use of robots in aged care (our main case study for exploring ethical challenges for AI in care):
van Wynsberghe, A. (2013). Designing Robots for Care: Care Centered Value-Sensitive Design. Science and Engineering Ethics, 19, 407-433.
• Draws on care ethics and value sensitive design to consider how robots can be ethically designed for use in care settings and to propose a method for evaluating ethical issues arising from the use of robots in aged care.
Vandemeulebroucke, T., et al (2018). The Use of Care Robots in Aged Care: A Systematic Review of Argument-Based Ethics Literature, Archives of Gerontology and Geriatrics, 74, 15-25.
• Summarises arguments about ethical issues associated with the use of robots in aged care through a systematic literature review study.

Introduction – my background and current research
What is care?
Who cares?
Designing AI for care: care ethics and value sensitive design
Joan Tronto’s four phases of care
Can AI care?

Introduction

About me…
Then: Bachelor of Arts (Psychology) @ UniMelb
Now: Associate Professor, Computing & Information Systems, Faculty of Engineering and Information Technology
https://findanexpert.unimelb.edu.au/profile/52243-jenny-waycott

https://cis.unimelb.edu.au/hci

2000 – 2011:
Educational technology (mobile technologies and social technologies in higher education)

Now: Emerging Technologies for Enrichment in Later Life
Photo: https://www.createdigital.org.au/human-like-robot-aged-care-homes/

Imagine you work for a robotics company whose motto is: “robots for social good”
The company is designing a companion robot to support people like Donald -> older adults who are socially isolated

DISCUSSION
• What functions should the robot perform?
• What functions should the robot NOT perform?
• Is there anything that might go wrong?
• Are there any issues the company should be concerned about?

What is care?
Image: Fang cuddling Mr Potato Head

Care: Some examples
[Photos: integrisok.com, Unsplash, news.com.au]

Care: Tronto’s definition
“Care is a common word deeply embedded in our every day language. On the most general level care connotes some kind of engagement; this point is most easily demonstrated by considering the negative claim: ‘I don’t care’”
Care carries two important aspects:
“First, care implies a reaching out to something other than the self: it is neither self-referring nor self-absorbing.
Second, care implicitly suggests that it will lead to some type of action.”
Joan Tronto (1993). Moral Boundaries: A Political Argument for an Ethic of Care, Routledge (p. 102)

Care: Tronto’s definition
“On the most general level, we suggest that caring be viewed as a species activity that includes everything that we do to maintain, continue, and repair our ‘world’ so that we can live in it as well as possible. That world includes our bodies, our selves, and our environment, all of which we seek to interweave in a complex, life-sustaining web.”
Joan Tronto (1993). Moral Boundaries: A Political Argument for an Ethic of Care, Routledge (p. 103)

Four phases of care (Tronto)
Caring about: noticing the need for care. Requires attentiveness.
Taking care of: “assuming some responsibility for the identified need and
determining how to respond to it.”
Care giving: requires competence. The need for care has only been met if good care has been provided.
Care receiving: “we need to know what has happened, how the cared-for people or things responded to this care.”

Elements of an Ethic of Care (Tronto)
Attentiveness: “If we are not attentive to the needs of others, then we cannot possibly address those needs.” OR: Ignoring others is “a form of moral evil”
Responsibility: “Ultimately, responsibility to care might rest on a number of factors; something we did or did not do has contributed to the needs for care, and so we must care.”
Joan Tronto (1993). Moral Boundaries: A Political Argument for an Ethic of Care, Routledge (pp. 127-132)

Elements of an Ethic of Care (Tronto)
Competence: “Intending to provide care, even accepting responsibility for it, but then failing to provide good care, means that in the end the need for care is not met. Sometimes care will be inadequate because the resources available to provide for care are inadequate. But short of such resource problems, how could it not be necessary that the caring work be competently performed in order to demonstrate that one cares?” (p. 133)
Responsiveness: “the responsiveness of the care-receiver to the care… By its nature, care is concerned with conditions of vulnerability and inequality… The moral precept of responsiveness requires that we remain alert to the possibilities for abuse that arise with vulnerability.” (pp. 134-135)

Elements of an Ethic of Care (Tronto)
“Care as a practice involves more than simply good intentions. It requires a deep and thoughtful knowledge of the situation, and of all of the actors’ situations, needs and competencies. To use the care ethic requires a knowledge of the context of the care process. Those who engage in a care process must make judgements: judgements about needs, conflicting needs, strategies for achieving ends, the responsiveness of care-receivers, and so forth.
[Care requires] an assessment of needs in a social and political, as well as a personal, context.” (p. 137)
Joan Tronto (1993). Moral Boundaries: A Political Argument for an Ethic of Care, Routledge (pp. 127-132)

You still work for a robotics company whose motto is: “robots for social good”!
The company has developed a robot and is ready to trial it with isolated older adults (in partnership with care providers).
What do you need to consider when preparing the trial?
Who will use the robot?
What are their care needs?

THE VIRTUAL ASSISTANT: ELLIQ
https://elliq.com/

https://elliq.com/pages/features

INTERVIEW STUDY
• 16 older adults living independently (aged 65 to 89)
• Interviews conducted in participants’ homes (pre-Covid – Jan 2019)
• Interviews focused on:
• Companionship preferences and social circumstances
• Responses to videos of three different kinds of virtual assistant/robots: an assistant, a toy, and a pet

ELLIQ: COMPANY OR INTRUSION?
Beth (who longed for human conversation) thought ElliQ and her chatter would be comforting in a quiet and lonely household:
“It breaks the silence of the day”
Sarah (who liked human company) found the idea of ElliQ’s conversation appealing:
“like having a person in the house”
Stephanie (who shunned human company): ElliQ would be like having another person in the house – “No thanks!”
“I don’t know whether that would drive me mental if it kept interrupting me and telling me what to do… I might want to get an axe and cut it up.” (Brianna)

Who cares? Can AI care?

Who cares?
“Care seems to be the province of women… The largest tasks of caring, those of tending to children, and caring for the infirm and elderly, have been almost exclusively relegated to women” (Tronto, p. 112)
“Care is fundamental to the human condition and necessary both to survival and flourishing… In people’s everyday lives care is an essential part of how they relate to others” (Barnes, 2016, p. 1) -> everyone engages in care-giving and care-receiving.

Who cares?
medical professionals,
care professionals (social work, childcare, aged care, etc.), parents, children, family
government / organisations (caring about and taking care of)
Machines / AI ?

Discussion
What are some other examples of machines/AI supporting or providing care?
What are the benefits of using AI in care? What are the challenges?

AI in parenting
https://www.theguardian.com/media/2022/may/01/honey-lets-track-the-kids-phone-apps-now-allow-parents-to- track-their-children

Can AI support care?
Parents using tech to monitor children’s location:
✓ Safety – can locate the child if there is something wrong: “When I think about it, it makes me feel safe, because I know that Mum or Dad knows where I am” (Lola, aged 17)
✓ Peace of mind – children “don’t answer their phone to their parents or text them back… I tend to catastrophise” (Alicia, parent)
❖Invasion of privacy? “At that point in my life, I wasn’t necessarily that happy about Mum knowing where I was all the time. I was sneaking out to smoke, so I didn’t want Mum to see that I was leaving school” (Ben)

Can AI support care?
Parents using tech to monitor children’s location:
❖Digital footprint:
“The idea that children are getting a detailed digital footprint not of their own making that tracks everywhere they go, and that’s being used to sell advertising to them now or later, is reprehensible”
(Prof Sonia Livingstone)
https://www.theguardian.com/media/2022/may/01/honey-lets-track-the-kids-phone-apps-now-allow-parents-to- track-their-children

Can AI support care?
https://www.weforum.org/agenda/2020/10/ai-artificial-intelligence-canada-homelessness-coronavirus-covid-19


Are there any other risks involved in using AI to predict homelessness?
https://www.weforum.org/agenda/2020/10/ai-artificial-intelligence-canada-homelessness-coronavirus-covid-19

Tackling rough sleeping: An example

https://www.theguardian.com/cities/2014/jun/12/anti-homeless-spikes- latest-defensive-urban-architecture

Can AI support social welfare?
Social welfare = societal and government responsibility to care for vulnerable citizens Can AI be used to determine who needs financial support?
Can AI be used to determine who has received financial support in error?
Robodebt scandal: the automated process of matching the Australian Taxation Office’s income data with social welfare recipients’ reports of income to Centrelink. Many people received debt notices in error.
-> scheme criticised for inaccurate assessment, illegality, shifting the onus of proof of debt onto welfare recipients, poor support and communication, and coercive debt collection.
(Braithwaite, 2020)

Can AI support self-care?


Continuous Integration & Continuous Deployment
GitHub Actions Case Study
Copyright University of Melbourne 2022
2022 – Semester 1, Tutorial Week 11


• Continuous Integration & Continuous Delivery
• Introduction to GitHub Actions
• Experiment with GitHub Actions

Terminology
• Continuous integration (CI) automatically builds, tests, and integrates code changes within a shared repository; then
• Continuous delivery (CD) automatically delivers code changes to production-ready environments for approval; or
• Continuous deployment (CD) automatically deploys code changes to customers directly.
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

A CI/CD pipeline
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

Continuous delivery vs. continuous deployment
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

Why CI/CD?
• Development velocity
– Ongoing feedback allows developers to commit smaller changes more often, versus waiting for one release.
• Stability and reliability
– Automated, continuous testing ensures that codebases remain stable and release-ready at any time.
• Business growth
– Freed up from manual tasks, organizations can focus resources on innovation, customer satisfaction, and paying down technical debt.
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

Example CI/CD workflow
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

What makes CI/CD successful
• Automation
– A good CI/CD workflow automates builds, testing, and deployment so you have more time for code.
• Transparency
– Quickly assess what went wrong and why.
• Speed/Resilience
– Lead time for changes (how quickly commits are made to code in production); Time to resolution (how quickly incidents are resolved); etc.
• Security
– Virtual paper trail for auditing failures, security breaches, non-compliance events.
https://resources.github.com/ci-cd/?scid=7013o000002CceTAAS

GitHub Actions
• GitHub Actions gives developers the ability to automate their workflows across issues, pull requests, and more—plus native CI/CD functionality.
• Launched in 2018.
https://resources.github.com/downloads/What-is-GitHub.Actions_.Benefits-and-examples.pdf

GitHub Actions
• Brings automation into the software development lifecycle on GitHub via event-driven triggers.
• Triggers are specified events that can range from creating a pull request to creating a new branch in a repository.
• GitHub Actions automations are handled via workflows, which are YAML files.
https://resources.github.com/downloads/What-is-GitHub.Actions_.Benefits-and-examples.pdf

GitHub Actions Workflows
• Events: Events are defined triggers that kick off a workflow.
• Jobs: Jobs are a set of steps that execute on the same runner.
• Steps: Steps are individual tasks that run commands in a job.
• Actions: An action is a command that’s executed on a runner.
• Runners: A runner is a GitHub Actions server.
https://resources.github.com/downloads/What-is-GitHub.Actions_.Benefits-and-examples.pdf

name: Welcome

on:
  pull_request:
    types: [opened, closed]
  issues:          # assumed trigger: the FIRST_ISSUE message below suggests issues
    types: [opened]

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses:      # the action name did not survive in this copy; see the marketplace link below
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          FIRST_ISSUE: |
            👋 @{{ author }}
            Thanks for opening your first issue here! Be sure to follow the issue template!
          FIRST_PR: |
            👋 @{{ author }}
            Thanks for opening this pull request! Please check out our contributing guidelines.
          FIRST_PR_MERGED: |
            🎉 @{{ author }}
            Congrats on merging your first pull request! We here at behaviorbot are proud of you!

https://github.com/marketplace/actions/welcome-new-users

Try it yourself!
• Create an empty GitHub repository
• https://docs.github.com/en/actions/quickstart
– Follow the “Creating your first workflow” steps
– Follow the “Viewing your workflow results” steps
• Add https://github.com/marketplace/actions/welcome-new-users to your repository and test it
• What other useful Actions are available on https://github.com/marketplace?type=actions?

Thank You!



MATH3090/7039: Financial mathematics Lecture 5

Interest rate swaps
Credit default swaps

A swap is a financial contract to exchange cashflow obligations or financial exposure from one basis to another:
• Swap between fixed and floating interest rates.
• Swap between payments denominated in different currencies.
• Swap or alter credit risk exposures.
Almost any set of cashflows or exposures can be swapped between market participants. We restrict ourselves to:
• Fixed-for-floating (vanilla) interest rate swaps.
• Credit default swaps.

Interest rate swaps
A fixed-for-floating or plain-vanilla interest rate swap (IRS) involves:
• One party borrowing at a fixed rate, but desiring floating.
• Another party borrowing at a floating rate, but desiring fixed.
• They are matched by a swap dealer to swap their yield exposures.

IRS: comparative advantage argument
Consider the following swap table:

                        Fixed     Floating
Firm A borrowing cost   14.50%    yield curve + 0.50%
Firm B borrowing cost   16.50%    yield curve + 1.70%
Difference              2.00%     1.20%
Net difference          0.80%

Firm A borrows at a lower absolute cost. But
• Firm A has a comparative advantage in borrowing fixed.
• Firm B has a comparative advantage in borrowing floating.
What if firm A desires floating exposure, and firm B fixed exposure?

IRS: comparative advantage argument
Suppose the parties enter into a fixed-for-floating swap:
• Firm A borrows fixed at 14.50% in the market.
• Firm B borrows yield curve + 1.70% in the market.
• They enter into a swap with each other, agreeing that:
  ◦ Firm A pays firm B yield curve + 1.70% in the swap.
  ◦ Firm B pays firm A fixed 16.25% in the swap.

IRS: comparative advantage argument
Both firms win:
• Firm A net gain is 0.55% per annum:
◦ Pays 14.50% and receives 16.25% fixed: 1.75% gain.
◦ Pays yield curve + 1.70% instead of + 0.50%: 1.20% loss.
• Firm B net gain is 0.25%:
◦ Pays 16.25% fixed instead of 16.50%.
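
A quick arithmetic check of these gains, as a small Python sketch of my own (the yield-curve level cancels out, so it is set to zero):

# Firm A: borrows fixed 14.50%, receives fixed 16.25% in the swap,
# pays yield curve + 1.70% in the swap (vs. + 0.50% borrowing floating directly)
gain_a = (-0.1450 + 0.1625 - 0.017) - (-0.005)
# Firm B: floating legs cancel; pays fixed 16.25% in the swap vs. 16.50% directly
gain_b = -0.1625 - (-0.1650)
print(round(gain_a, 4), round(gain_b, 4))  # 0.0055 and 0.0025, i.e. 0.55% and 0.25% p.a.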

IRS: mechanics
• Interest payments are calculated based on a notional principal.
  ◦ Fixed PMTs = fixed rate 16.25% × notional.
  ◦ Floating PMTs = (1-period forward rate + 1.7%) × notional.
  (Payments are semiannual, so each payment uses half the annual rate.)
• Net interest is paid in arrears as the fixed payment minus the floating payment, based on the spot rate at the start of each period.
• The floating rate is based on a reference yield curve: BBSW or LIBOR.
• The fixed rate is called the swap rate.

IRS: pricing
• We calculate what’s called the swap value.
• Pricing: setting the swap rate so the swap value equals zero.
• Application of both DCF and arbitrage/replication principles:
  ◦ Price off an arbitrage-free yield curve.
  ◦ Swap value is the present value of future net cashflows.
An example will best illustrate.

IRS: example
• Construct the 1-period forward yield curve from the zero one.
• Suppose semiannual compounding over 4 years.

Period   Spot yields       1-period forward yields
1        y0,1 = 0.1264     y0,1 = 0.1264
2        y0,2 = 0.1289     y1,2 = 0.1314
3        y0,3 = 0.1316     y2,3 = 0.1370
4        y0,4 = 0.1349     y3,4 = 0.1448
5        y0,5 = 0.1372     y4,5 = 0.1464
6        y0,6 = 0.1410     y5,6 = 0.1601
7        y0,7 = 0.1448     y6,7 = 0.1677
8        y0,8 = 0.1480     y7,8 = 0.1705

IRS: example (cont.)
• Firm A pays 1-period forward + 1.70%, Firm B pays fixed 16.25%.
• Notional principal is $1,000,000, 4 years, semiannual payments.

Period   Spot     Forward   Fix PMT   Float PMT   Fix−Float   PV @ Spot
1        0.1264   0.1264    81,250    71,700        9,550       8,982
2        0.1289   0.1314    81,250    74,201        7,049       6,221
3        0.1316   0.1370    81,250    77,005        4,245       3,506
4        0.1349   0.1448    81,250    80,915          335         258
5        0.1372   0.1464    81,250    81,712         −462        −332
6        0.1410   0.1601    81,250    88,551       −7,301      −4,851
7        0.1448   0.1677    81,250    92,371      −11,121      −6,818
8        0.1480   0.1705    81,250    93,767      −12,517      −7,071

Swap value: −104.59

IRS: example (cont.)
In the calculated example,
Swap value: PV Fixed PMTs – PV float PMTs = -$104.59.
• Firm A pays floating / receives fixed: Swap has negative value.
• Firm B pays fixed / receives floating: Swap has positive value. Pricing a swap: involves setting the swap and floating rates to ensure:
• The swap value is initially equal to zero.
• It is mutually beneficial for both parties to enter into a swap. Given the floating rate, the correct fixed rate is ≈ 16.2535%.

Interest rate swaps
Credit default swaps

Credit default swaps (CDS)
A credit default swap is essentially an insurance contract in which:
• The buyer pays regular premiums to the seller, which are calculated from the credit default rate.
• The buyer receives an (insurance) payout from the seller upon the occurrence of a credit event.
Credit default swaps are often called default insurance contracts.
• We first develop some terminology and notation.
• We then turn to valuing or pricing CDSs.

CDS: terminology
• Reference entity: The entity over which the CDS is written.
• Reference asset: The specific asset over which the CDS is written.
Example
• Eg ANZ Bank (buyer) gets a prespecified payout from a party (seller) in the event that BHP (reference entity) defaults on an interest payment on a large loan (reference asset) it has with ANZ.
Hence, in this case ANZ is insuring against the possibility of BHP defaulting on an interest payment on a loan that BHP has with ANZ.

CDS: terminology (cont.)
• Credit event: Events upon whose occurrence a payout is made:
  ◦ Hard: Default on interest or loan payments, principal, etc.
  ◦ Soft: Corporate restructuring, credit rating downgrade, corporate takeover or merger, asset writeoff, credit deterioration, etc.
• Protection buyer: The buyer in the CDS, who pays a regular premium in exchange for receiving a payout from a credit event.
• Protection seller: The seller in the CDS, who agrees to make the payout for a credit event in return for receiving the premium.

CDS: terminology (cont.)
• Notional principal or amount: Underlying ‘value’ of the CDS.
• Premium payments: The regular payment made by the buyer.
• Premium payments = credit default rate × notional principal.
• Probability of default: The probability that a credit event occurs.
• Recovery rate: The percent amount recovered upon default.
• Loss given default: Dollar amount lost upon default.
• Payout: Payout made by the seller in the event of a credit event.

CDS: notation
• T is the time in years to maturity of the CDS.
• 0 = t0,t1,…,tN−1,tN = T is a set of dates.
• y0,1, y0,2, . . . , y0,N−1, y0,N is the zero coupon bond yield curve. • F is the notional principal.
• r is the credit default rate.
• Cn is the cashflow to be paid on the reference asset at time tn. • pn is the probability of default on the cashflow Cn.
• Rn is the recovery rate on cashflow Cn in the case of default.

CDS: assumptions
Assume only hard credit events: Default on the cashflows. In the case of default on cashflow Cn we set:
• The amount recovered at time tn is
recovery rate × Cn = RnCn.
• The dollar payout made at time tn is
payout = Cn − amount recovered = (1 − recovery rate)Cn
= (1 − Rn)Cn.
• The dollar payout is actually the loss given default.
We also assume the swap no longer exists after a default event.

CDS: pricing
Similar to IR swaps:
• We calculate what’s called the credit default swap value.
• Pricing: Set the credit default rate r so the swap value equals zero.
• Application of both DCF and arbitrage/replication principles:
◦ We price off an arbitrage-free yield curve and the swap value
is the present value of the swap’s future cashflows. ◦ Priced from the perspective of the buyer.
A simple example will best illustrate.

CDS: example
• The buyer wants to insure against default on a coupon-paying
bond with principal F and annual coupon rate c.
• T = 3 years and we are given a zero coupon bond yield curve
y0,1 , y0,2 , and y0,3 .
• Both probability of default p and recovery rate R are constant.

CDS: example (cont.)
There are only four possible outcomes over the life of the swap:
(i) A default occurs at the end of year 1 and the swap expires.
(ii) A default occurs at the end of year 2 and the swap expires.
(iii) A default occurs at the end of year 3 (at maturity).
(iv) No default occurs.
The swap value equals the sum of the present value of each of these outcomes multiplied by their probabilities of occurring.

CDS: example (cont.)
Time t = 1 year:
• Cashflows upon default with probability p:
◦ Payout (1 − R)C is received, no premium paid.
• Cashflows upon no default with probability 1 − p:
◦ Premium rF is paid.
Default: Swap expires and cashflow is p(1 − R)C in year 1.
No default: Swap survives so continue to work out its future cashflows.

CDS: example (cont.)
Time t = 2 years: Probability of (1 − p) of getting to t2. • Cashflows upon default with probability (1 − p)p:
◦ Payout (1 − R)C is received, no premium paid.
• Cashflows upon no default with probability (1 − p)²:
◦ Premium rF is paid.
Default: Swap expires and cashflows are
−rF in year 1, p(1−R)C in year 2.
No default: Swap survives so continue to work out its future cashflows.

CDS: example (cont.)
Time t = 3 years = maturity: Probability of (1 − p)² of getting to t3.
• Cashflows upon default with probability (1 − p)²p:
  ◦ Payout (1 − R)(C + F) is received, no premium paid.
• Cashflows upon no default with probability (1 − p)³:
◦ Premium rF is paid.
Default: Swap matures and cashflows are
−rF in year 1, −rF in year 2, (1−R)(C +F) in year 3. No default: Swap matures and cashflows are −rF each year.

CDS: example (cont.)
rF 1+y0,1 − rF
(1+y0,2 )2
+ (1−R)(C+F) (1+y0,3 )3
1+y0,1 − rF
(1+y0,2 )2
Event Default t1 Default t2 Default t3 No Default
Swap Cashflows (1 − R)C −rF, (1−R)C
−rF, −rF, (1−R)(C +F) − −rF, −rF, −rF
PV Swap Cashflows
(1−R)C 1+y0,1
− rF + (1−R)C
Probability
p (1−p)p (1−p)2p (1−p)3
rF (1+y0,2 )2
rF (1+y0,3 )3
􏰈(1−R)C􏰉 1 + y0,1
2 􏰈 +(1−p) p −
swap value = p
(1−R)C 􏰉 (1 + y0,2)2
(1 + y0,2)2
(1−R)(C+F)􏰉 (1 + y0,3)3
1 + y0,1 (1 + y0,2)2 (1 + y0,3)3
+(1−p)− − − .

CDS: pricing
The swap value equals the sum of the present value of each possible outcome multiplied by its probability of occurring.

CDS: example with numbers
• Buyer wants to insure against default on a coupon-paying bond
with principal F = 100 and annual coupon rate of c = 5%.
• T = 3 years and we are given a zero coupon bond yield curve
y0,1 = 3%, y0,2 = 4%, and y0,3 = 5%.
• The probability of default is constant at p = 25% and the recovery rate is constant at R = 60%. The swap rate is r = 3.68%

CDS: example with numbers (cont.)

Event        Swap cashflows           Probability
Default t1   2                        0.25
Default t2   −3.68, 2                 (0.75)(0.25)
Default t3   −3.68, −3.68, 42         (0.75)²(0.25)
No default   −3.68, −3.68, −3.68      (0.75)³

swap value = 0.25(2/1.03)
           + (0.75)(0.25)(−3.68/1.03 + 2/1.04²)
           + (0.75)²(0.25)(−3.68/1.03 − 3.68/1.04² + 42/1.05³)
           + (0.75)³(−3.68/1.03 − 3.68/1.04² − 3.68/1.05³)
           = −0.0004 ≈ 0.
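
The same calculation can be scripted and the fair rate found numerically. Here is a Python sketch of my own (names invented) that reproduces the swap value and bisects for the r that zeroes it:

y = [0.03, 0.04, 0.05]          # zero-coupon yields y0,1, y0,2, y0,3
F, c, p, R = 100, 0.05, 0.25, 0.60
C = c * F                       # annual coupon on the reference bond

def swap_value(r):
    value, premiums, survive = 0.0, 0.0, 1.0
    for n in range(1, 4):
        payout = (1 - R) * (C + (F if n == 3 else 0))
        # default at t_n: premiums paid so far, plus the payout received
        value += survive * p * (premiums + payout / (1 + y[n - 1]) ** n)
        premiums -= r * F / (1 + y[n - 1]) ** n   # premium paid if we survive year n
        survive *= 1 - p
    return value + survive * premiums             # the no-default path

lo, hi = 0.0, 1.0
for _ in range(60):                               # bisection: value decreases in r
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if swap_value(mid) > 0 else (lo, mid)
print(round(mid, 4))                              # ≈ 0.0368, matching the slide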



#include <stdio.h>
#include <pthread.h>

/* Each thread prints the message passed to it. */
void *thread_func (void *ptr)
{
    printf ("%s\n", (char *) ptr);
    return NULL;
}

int main (void)
{
    pthread_t t1, t2;
    char *msg1 = "Thread 1";
    char *msg2 = "Thread 2";
    int t1_ret, t2_ret;

    t1_ret = pthread_create (&t1, NULL, thread_func, (void *) msg1);
    t2_ret = pthread_create (&t2, NULL, thread_func, (void *) msg2);

    pthread_join (t1, NULL);
    pthread_join (t2, NULL);

    printf ("Threads finished with %d/%d codes\n", t1_ret, t2_ret);
    return 0;
}



Real Time Embedded Systems Introduction
Dr. Alex Bystrov
School of Engineering University of Newcastle upon Tyne


1. Hardware/software design and modelling of embedded computing systems
2. Protocols, design concepts and scheduling
3. Experience in programming of real-time systems

The formal lectures cover a set of hardware and software aspects
◮ Real-time aspect: Petri net model, concurrency, arbitration, communication modelling, ACM
◮ Embedded design: threads, processes, scheduling, programming, setting up communication links between processes.
◮ ACM and architectures with Dr. …; the practical aspects develop skills and help to understand the theory more deeply.
◮ Software design – scheduler, real-time, ARM platform

Assessment
◮ no exams this year
◮ 40% presentation
◮ 60% written report 3000 words

Deadline: the last Friday of the semester

What is an embedded system?
Definition: “nearly any computing unit not used as a desktop computer” (Vahid & Givargis 2002)
◮ What about laptops, servers and mainframes?
◮ What if the desktop is used to control something, to route the
network or just for Skype?
A better definition (Gupta 2002):
◮ employs a combination of hardware+software to perform a specific function;
◮ is a part of a larger system that may not be a “computer”;
◮ works in a reactive and time-constrained environment.

General characteristics (Williams 2006)
◮ Latency limit
◮ Event-driven scheduling
◮ Time-driven scheduling
◮ Low-level programming
◮ SW tightly coupled to special HW
◮ Dedicated specialised functions
◮ Computer inside a control loop
◮ Multi-tasking
◮ Continuous running
◮ Various specific metrics: safety-critical app., security, power constraints, variability-tolerant design, etc.

Software/hardware co-design
◮ Software provides features and flexibility;
◮ Hardware (processors, ASICs, memory, accelerators, etc.) is
used for performance, fault-tolerance and security.
Example of DSP:
◮ DSP code
◮ processor cores
◮ application-specific gates ◮ analogue I/O
[Gupta 2002]


Common design metrics
Unit cost: the fabrication cost of a single copy of the system, excluding NRE cost
NRE cost (Non-Recurring Engineering cost): the one-time monetary cost of designing the system
Total cost: NRE_cost + Unit_cost * Number_of_units
Per-product cost: NRE_cost / Number_of_units + Unit_cost
Time-to-prototype: time to develop a working version
Time-to-market: time to develop a system to the point when it can be released and sold to customers
Flexibility: the ability to change the functionality of the system without incurring heavy NRE costs
Physical metrics
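
A tiny worked example of my own (the cost figures are invented) showing how the NRE cost amortises with volume under these formulas:

nre_cost = 50_000.0   # one-time design cost
unit_cost = 10.0      # fabrication cost per copy

for units in (100, 10_000, 1_000_000):
    total = nre_cost + unit_cost * units
    per_product = nre_cost / units + unit_cost
    print(f"{units:>9} units: total ${total:,.0f}, per-product ${per_product:,.2f}")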

Time-to-market metric (simplified model)

NRE and Unit Cost design metrics

Design productivity gap (HW)
Moore’s law…

The gap is bigger than it seems…
◮ “The Mythical Man-Month” by Fred Brooks, 1975
◮ Scaling up the group “hits the wall” at some point
◮ After that point adding new men does NOT increase overall productivity
◮ Individual productivity goes down

Conclusions
◮ Embedded systems, including real-time, are everywhere and represent all electronic applications
◮ Everything above is an Introduction – the really important and complex stuff will follow
◮ Attendance is important
◮ Finding the literature sources is extremely important
◮ This is an advanced course – please help us to analyse your needs as a learner, so we could support you to a maximum extent
◮ Technical content of the introduction covered a definition of an embedded systems and metrics



Experimental design for supervised learning — Introduction
School of Computing and Information Systems
@University of Melbourne 2022


Supervised vs Unsupervised Learning
Supervised
Classification Regression
Unsupervised
Clustering
Association (Recommendation)
Dimensionality reduction: Feature selection & feature projection
Others: Reinforcement Learning, Transfer learning, etc.

Experimental Design (supervised)
• Evaluation methods
• Performance metrics
• Feature selection

Evaluation methods for supervised learning
School of Computing and Information Systems
@University of Melbourne 2022

Experimental Design (supervised)
• Evaluation methods
• Performance metrics
• Feature selection

The generalisation challenge
[Figure: two fitted models – the first generalises well; the second overfits the data, with low bias error but high variance error]

The generalisation challenge – cont.
When a model learns too much from the training data: it has
– Low bias error
• Predicts well on training data – But high variance error
• Predictions change widely given different training data.

Evaluation method – training and test sets
How do we know if our model will do well on unseen data? • We train the model on a set of data – the training set.
• We evaluate the model on a new set of data – the test set.
• Assumptions:
• Training and test sets are from the same distribution
• Samples are drawn independently and identically distributed (i.i.d.) at random.
• Only one set? Partition the set into training and test set.

Training – validation – test split
• A separate set for model selection – the validation set – prevents data leakage into the test set.
[Diagram: the data partitioned into Training | Validation | Test (hold out) sets]
• Repeat to select the best model (or hyperparameters):
  • Train the model on the training set.
  • Select the model that performs best on the validation set.
• Use Training + Validation sets to train the selected model.
• Report performance on the independent test set (hold out set).
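
A minimal scikit-learn sketch of this protocol; the dataset, model and split sizes are illustrative choices of mine, not the subject's:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# Carve off the independent test (hold-out) set first...
X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# ...then split the remainder into training and validation sets for model selection.
X_tr, X_val, y_tr, y_val = train_test_split(X_trval, y_trval, test_size=0.25, random_state=0)

best_k = max((1, 3, 5, 7),
             key=lambda k: KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_val, y_val))

# Refit the selected model on training + validation, report on the hold-out set.
final = KNeighborsClassifier(n_neighbors=best_k).fit(X_trval, y_trval)
print(best_k, final.score(X_test, y_test))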

Evaluation method – Cross validation
• Statistical uncertainty – the problem when the dataset is small.
• We cannot be confident on the performance estimation.
• Cross validation – uses all data to estimate model performance

K-fold cross validation
Partition training data randomly into k blocks
Repeat to select the best hyper-parameters or model:
– Repeat k times:
  • k − 1 blocks for training
  • 1 block for evaluation (validation)
– Average the k scores
[Diagram: the training set split into k blocks, each taking a turn as the validation block]
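
In scikit-learn this is a few lines; a sketch with illustrative data and model of my own (RepeatedKFold would give the repeated variant described later):

from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)  # k = 5 blocks
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(scores.mean(), scores.std())  # average of the k scores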

Leave one out cross validation (LOOCV)
• n-fold cross validation (k = n)
• The validation set is exactly one observation.
• More expensive to run.
[Diagram: each single observation takes a turn as the validation set]

Repeated k-fold cross validation
• Repeat k-fold cross validation r times, for example 5 times, to reduce any bias from random number choices
Repeat r times:
  i. Partition dataset randomly into k blocks
  ii. Repeat k times:
    • k − 1 blocks for training and 1 block for testing
    • Record 1 performance score
Average the k × r scores

Evaluation method – Bootstrap validation
• Bootstrap – A man pulling himself up and over a fence by pulling upwards on his own bootstraps.
• Relying on smaller samples of the population itself in order to draw conclusions on the population.
• Bootstrap sample – each smaller sample is drawn randomly from the original dataset with replacement.
• Out of bag data (OOB): remaining data points NOT in the bootstrap sample; these are the test (validation) data.

Bootstrap validation (bootstrapping) – cont.
Draw b bootstrap samples (from the training data).
Repeat to select the best hyper-parameters or model:
– Repeat b times, once for each bootstrap sample:
  • Train the model on the bootstrap sample,
  • Evaluate the performance on the OOB (test/validation) data.
– Report the mean and standard deviation of the b performance scores.
• Can handle imbalanced data sets – use the stratified bootstrap, where biased sampling is used.
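A sketch of this procedure, assuming b = 30 bootstrap samples and a decision tree as the model under evaluation; both choices, and the synthetic data, are arbitrary for the example.

# Bootstrap validation with out-of-bag (OOB) scoring.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
rng = np.random.default_rng(0)
scores = []
for _ in range(30):
    idx = rng.integers(0, len(X), size=len(X))   # sample rows with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)   # rows never drawn = OOB data
    model = DecisionTreeClassifier().fit(X[idx], y[idx])
    scores.append(model.score(X[oob], y[oob]))   # evaluate on OOB rows
print(np.mean(scores), np.std(scores))           # mean and std of the b scores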

The overall flow
How do we know if a decision tree is better than kNN for the dataset?
1. Model evaluation and selection using one of the following:
   a. Hold-out: training – validation splits
   b. Cross validation: k-fold, leave one out, repeated CV
   c. Bootstrapping
After step 1, you have selected the final algorithm and hyper-parameters:
2. Fit the selected model and hyper-parameters with the entire training set.
3. Report performance on the independent test set.

Performance metrics
Regression and Classification

Classification metrics

Confusion Matrix
The outcomes of the classification can be summarised in a confusion matrix (contingency table):
• Actual class: {yes, no, yes, yes, …}
• Predicted class: {no, yes, yes, no, …}
Rows give the ACTUAL CLASS, columns the PREDICTED CLASS, with cells:
TP: true positive, FN: false negative, FP: false positive, TN: true negative.

Classification metric – Accuracy
How many observations are correctly classified out of N observations:

Accuracy = (#TP + #TN) / N

• N is the total number of observations: N = #TP + #FN + #FP + #TN
• Example: Accuracy = 0.96

• Accuracy is misleading in imbalanced problems.
• Example: Accuracy = 0.97, yet the predictions for the minority class are completely wrong – the overall accuracy value is high anyway.

Classification metric – Recall

Recall = #TP / (#TP + #FN)

• Example: Recall ≈ 0.94
• Use recall when you do not want FN (e.g. detecting as many malicious programs as possible).
• Recall – effectiveness of a classifier at identifying class labels.

Classification metric – Precision
• Precision – agreement of the true class labels with those of the classifier:

Precision = #TP / (#TP + #FP)

• Example: Precision ≈ 0.98
• Use precision when you DON'T want FP (e.g. avoid putting innocent people in prison).

Classification metric – F1
• F1 – the harmonic mean between precision and recall:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

• With the values above: 2 × (0.98 × 0.94) / (0.98 + 0.94) ≈ 0.96
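These four metrics can be computed directly with scikit-learn; the label vectors below are hypothetical, chosen only to exercise the formulas.

# Classification metrics on hypothetical binary labels.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
print(confusion_matrix(y_true, y_pred))  # rows: actual, columns: predicted
print(accuracy_score(y_true, y_pred))    # (TP + TN) / N
print(recall_score(y_true, y_pred))      # TP / (TP + FN)
print(precision_score(y_true, y_pred))   # TP / (TP + FP)
print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall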

Multi-class Confusion matrix
• What about 3 classes? Calculate TP, FP, TN, FN for each class of the 3×3 confusion matrix (cells c1…c9; rows: actual, columns: predicted).
• Sunny: #TP: c1
  #FN: c2 + c3
  #FP: c4 + c7
  #TN: c5 + c6 + c8 + c9
• Rain: #TP: c5
  #FN: c4 + c6
  #FP: c2 + c8
  #TN: c1 + c3 + c7 + c9
• Overcast: #TP: c9
  #FN: c7 + c8
  #FP: c3 + c6
  #TN: c1 + c2 + c4 + c5

Multi-class Accuracy
• Subset accuracy = (Σₖ #TPₖ) / N
• Average accuracy = (1/k) Σₖ (#TPₖ + #TNₖ) / N
• N is the total number of observations, k is the number of classes.

Multi-class Accuracy – cont.
• Subset accuracy = (30 + 28 + 33) / 100 = 0.91 (using the sunny/rain/overcast counts below)

Multi-class Accuracy – cont.
• Average accuracy = (1/k) Σₖ (#TPₖ + #TNₖ) / N
Sunny: 94/100 (TP: 30, TN: 28 + 2 + 1 + 33)
Rain: 95/100 (TP: 28, TN: 30 + 3 + 1 + 33)
Overcast: 93/100 (TP: 33, TN: 30 + 0 + 2 + 28)
Average accuracy: (0.94 + 0.95 + 0.93) / 3 = 0.94
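The two multi-class accuracies can be checked numerically; the 3×3 matrix below is reconstructed from the per-class TP/TN sums on this slide (rows: actual, columns: predicted; classes: sunny, rain, overcast).

# Subset and average accuracy from a 3-class confusion matrix.
import numpy as np

cm = np.array([[30, 0, 3],
               [2, 28, 2],
               [1, 1, 33]])
N = cm.sum()                      # 100 observations
subset_acc = np.trace(cm) / N     # (30 + 28 + 33) / 100 = 0.91
tp = np.diag(cm)
# TN for class i = everything outside row i and column i.
tn = np.array([N - cm[i, :].sum() - cm[:, i].sum() + cm[i, i] for i in range(3)])
avg_acc = np.mean((tp + tn) / N)  # (0.94 + 0.95 + 0.93) / 3 = 0.94
print(subset_acc, avg_acc)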

Multi-class metrics – cont.
• Recall, Precision, and F1
• Involves averaging over multiple classes.
• Macro and Micro averaging options.
• We do not go into these in this subject, but it is good to know that they exist.

Regression metrics

Linear Regression – revision

Yᵢ = β₀ + β₁Xᵢ + εᵢ

where εᵢ = Yᵢ − Ŷᵢ is the error term (the residual) for this X value.

Regression metrics
• MSE – Mean Square Error (lower is better):

MSE = (1/n) Σᵢ (Yᵢ − Ŷᵢ)²    (sum over i = 1, …, n)
SSE = Σᵢ (Yᵢ − Ŷᵢ)²

Regression metrics – cont.
• RMSE – Root Mean Square Error (the most used measure; a lower value is better):

RMSE = √( (1/n) Σᵢ (Yᵢ − Ŷᵢ)² )

Regression metrics – cont.
• MAE – Mean Absolute Error (a lower value is better):

MAE = (1/n) Σᵢ |Yᵢ − Ŷᵢ|
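All three metrics are available in scikit-learn; the true and predicted values below are hypothetical.

# MSE, RMSE and MAE on hypothetical predictions.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                        # square root of MSE
mae = mean_absolute_error(y_true, y_pred)
print(mse, rmse, mae)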

Regression metrics and outliers
• MAE and RMSE are on the same scale as the residuals; MSE is on the quadratic scale of the residuals.
• Which is sensitive to outliers? MSE is the most sensitive, since residuals are squared.

• Others (MAPE, Median Absolute Error)
• https://scikit-learn.org/stable/modules/model_evaluation.html#

Feature selection – univariate
Supervised Learning

Feature selection – univariate
Intuition: evaluate the "goodness" of each feature.
• Consider each feature separately: linear time in the number of attributes.
• Typically the most popular and simple strategy.

Feature selection for classification
What makes a single feature good?
• Well correlated with the class
• Not independent of the class
Which of a1, a2 is a good feature for predicting the class c?

Feature selection – Mutual information (MI)
What makes a feature good? If it is well correlated with the class.
• Mutual Information (revision): MI(X, Y) = H(Y) − H(Y|X)
• Is feature a well correlated with the class c? MI(a, c) = H(c) − H(c|a)
• High MI(a, c): a strongly predicts c; select a into the feature set.
• Low MI(a, c): a cannot predict c; a is not selected into the feature set.

Feature selection – MI – cont.
Is a1 well correlated with the class c?
MI(a1, c) = H(c) − H(c|a1) = 1 − 0 = 1 (high MI: yes!)
• The feature a1 perfectly predicts c; select a1 as a feature.
Is a2 well correlated with the class c?
MI(a2, c) = H(c) − H(c|a2) = 1 − 1 = 0 (low MI: no!)
• The feature a2 cannot predict c at all; a2 is NOT selected as a feature.
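The same example checked numerically with a hypothetical 4-row dataset matching the idea above; note that scikit-learn's mutual_info_score reports MI in nats (ln 2 ≈ 0.693 nats = 1 bit).

# MI-based feature scoring: a1 determines c, a2 is uninformative.
import numpy as np
from sklearn.metrics import mutual_info_score

a1 = np.array([0, 0, 1, 1])
a2 = np.array([0, 1, 0, 1])
c = np.array([0, 0, 1, 1])
print(mutual_info_score(a1, c))  # ≈ 0.693 nats = 1 bit -> select a1
print(mutual_info_score(a2, c))  # 0                    -> drop a2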

Feature selection – Chi-square χ² test
What makes a single feature good? If it is not independent of the class – the chi-square χ² test quantifies this.

Chi-square χ² – Null Hypothesis
H0: the two variables are independent: P(A ∩ B) = P(A) × P(B)
• A statistical hypothesis test for quantifying the independence of pairs of nominal or ordinal variables.
• Takes the sample size into account and has a significance test (MI does not).

Chi-square χ² test – cont.
H0: a feature and the class variable are independent.
Is there a difference between observed and expected frequencies?
1. Summarise the observed frequencies of the feature and the class.
2. Find the expected frequencies for all (feature, class) pairs.
3. Calculate the χ² statistic: the difference between observed and expected values.
4. Look up the χ² critical value to test H0.
If the statistic exceeds the critical value, reject H0: the feature and class variables are not independent.

Chi-square χ² test – example
Chi-square χ² test for a1 and c:
1. Summarise the observed frequencies of a1 and c in a contingency table:
        c = Y   c = N
a1 = Y    2       0
a1 = N    0       2

Chi-square χ² test – example
Chi-square χ² test for a1 and c:
2. Find the expected frequencies for all (feature, class) pairs:
• Probability P(a1 = Y ∩ c = Y) = P(a1 = Y) × P(c = Y) = (2/4) × (2/4) = 1/4
• Expected frequency E(a1 = Y, c = Y) = N × P(a1 = Y ∩ c = Y) = 4 × 1/4 = 1
(similarly, every expected cell frequency here is 1)

Chi-square χ² test – example
3. Calculate the χ² statistic (the difference between observed and expected frequencies):

χ²(a1, c) = Σ_{c∈{Y,N}} Σ_{a1∈{Y,N}} (O_{a1,c} − E_{a1,c})² / E_{a1,c}

χ²(a1, c) = (2−1)²/1 + (0−1)²/1 + (0−1)²/1 + (2−1)²/1 = 4

Chi-square χ² test – example
4. Look up the χ² critical value and test H0:
• With 95% confidence (α level = 0.05)
• Degrees of freedom df = (2 − 1) × (2 − 1) = 1 (a1 has 2 values, c has 2 values)
• The χ² critical value is 3.84 (look up this value)

Chi-square χ² test – example
4. Look up the χ² critical value (3.84 at df = 1, α = 0.05) to test H0:
• The χ² statistic is 4 > the critical value 3.84, i.e. the p-value < 0.05
• Reject H0: a1 is NOT independent of c
• a1 is selected into the feature set
In general: if the p-value > 0.05, fail to reject H0; if the p-value < 0.05, reject H0.
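For reference, the same test can be run with scipy (scipy itself is an assumption here, not mentioned on the slides); correction=False disables Yates' continuity correction so the statistic matches the hand calculation.

# Chi-square test on the 2x2 contingency table above.
from scipy.stats import chi2_contingency

observed = [[2, 0],   # a1 = Y: counts for c = Y, c = N
            [0, 2]]   # a1 = N: counts for c = Y, c = N
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(chi2, p, dof)   # 4.0, p ≈ 0.0455 < 0.05, dof = 1 -> reject H0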
Degrees of freedom
• The maximum number of independent values that have freedom to vary in the data sample.
• If a feature can have k different possible values, why is the degree of freedom k − 1? Because the frequency of the last value is totally determined by those of the other k − 1 values; the frequency of the last value is not free to vary.
• Example: weather forecasts have 3 possible values: sunny, rain, or overcast. Given N samples, if n1 samples are sunny and n2 samples are rain, the number of samples that are overcast must be N − n1 − n2.

Univariate feature selection – potential issues
Difficult to control for inter-dependence of features:
• Feature filtering of single features may remove important features. For example, where the class is the XOR of some features:
  • Given all the features, the class is totally predictable.
  • Given only one of them, the MI or chi-square statistic is 0.
• In practice, feature extraction is also used, i.e. construct new features out of existing ones, e.g. the ratio of / difference between features:

Income  Expenditure  i>e  Credit
120     100          1    Good
50      30           1    Good
50      70           0    Bad
200     40           1    Good
200     210          0    Bad
…       …            …    …
160     150          1    Good

Feature selection for regression
1. Mutual Information

Model evaluation with feature selection
• Feature selection should be done within each training step.
K-fold cross validation procedure:
Partition the training data randomly into k blocks.
Repeat to select the best hyper-parameters or model:
– Repeat k times:
  • Feature selection on the k − 1 blocks,
  • k − 1 blocks for training,
  • 1 block for evaluation.
– Average the k scores.
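One way to enforce this in practice (an implementation choice, not the slide's prescription) is to wrap the selector and the model in a scikit-learn Pipeline, so the selector is re-fitted inside every fold; the data and the choice of k = 5 features are hypothetical.

# Feature selection kept inside each CV fold via a Pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=5)),  # fitted per fold
    ("knn", KNeighborsClassifier()),
])
print(cross_val_score(pipe, X, y, cv=5).mean())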

Model evaluation with feature selection
• Feature selection should be done within each training step.
Bootstrap validation procedure:
Draw b bootstrap samples.
Repeat to select the best hyper-parameters or model:
– Repeat b times, once for each bootstrap sample:
  • Feature selection on the bootstrap sample,
  • Train the model on the bootstrap sample,
  • Evaluate the performance on the OOB (test/validation) data.
– Report the mean and standard deviation of the b performance scores.

The overall flow with feature selection
1. Model evaluation and selection with feature selection.
2. Apply feature selection and fit the selected model and hyper-parameters with the entire training set.
3. Report performance on the independent test set.

Ve492: Introduction to Artificial Intelligence

Ve492: Introduction to Artificial Intelligence
Knowledge-based Agent and Propositional Logic

UM-SJTU Joint Institute Slides adapted from AIMA, UM, CMU


Learning Objectives
❖ What is knowledge-based agent?
❖ What is a logical agent?
❖ What is propositional logic?
❖ How to perform inference in propositional logic?

❖ Knowledge-based Agents
❖ Propositional Language
❖ Inference Algorithms

Knowledge-based Agent
(diagram: a Knowledge Base and an Inference component inside the agent, interacting with the Environment)

Problem Type
❖ Environment Type:
❖ Partially-observable
❖ Single agent
❖ Deterministic
❖ Discrete
❖ Sequential (AIMA)
❖ Known model
❖ Planning agent instead of reflex agent
❖ Can derive new facts from what it currently knows

Wumpus World
Performance:
❖ pick up gold = +1000
❖ get eaten or fall in pit = −100
❖ step = −1
❖ shoot = −10
Environment: the grid-world cave shown in the simulator linked below
Actions:
❖ move forward
❖ turn left or right
❖ pick up
Percepts:
❖ Stench, ❖ Breeze, ❖ Glitter, ❖ Bump, ❖ Scream
http://thiagodnf.github.io/wumpus-world-simulator/

High-Level Implementation
function KB-AGENT(percept) returns an action
  persistent: KB, a knowledge base
              t, an integer, initially 0
  TELL(KB, PROCESS-PERCEPT(percept, t))
  action ← ASK(KB, PROCESS-QUERY(t))
  TELL(KB, PROCESS-RESULT(action, t))
  t ← t + 1
  return action

So what do we TELL our knowledge base (KB)?
❖ Facts (sentences)
  ❖ The grass is green
  ❖ The sky is blue
❖ Rules (sentences)
  ❖ Eating too much candy makes you sick
  ❖ When you’re sick you don’t go to school
❖ Percepts and Actions (sentences)
  ❖ Pat ate too much candy today
What happens when we ASK the agent?
❖ Inference – new sentences created from old
  ❖ Pat is not going to school today
What language to use?

Formal Language
❖ Syntax: what sentences are allowed?
❖ Semantics:
  ❖ What are the possible worlds?
  ❖ Which sentences are true in which worlds? (i.e., the definition of truth)
❖ Model theory: how do we define whether a statement is true or not?
  ❖ Truth and entailment
❖ Proof theory: what conclusions can we draw given a state of partial knowledge?
  ❖ Soundness and completeness

❖ Knowledge base = set of sentences in a formal language
❖ Declarative approach to building an agent (or other system):
❖ Tell it what it needs to know (or have it Learn the knowledge)
❖ Then it can Ask itself what to do—answers should follow from the KB
❖ Agents can be viewed at the knowledge level
i.e., what they know, regardless of how implemented
❖ A single inference algorithm can answer any answerable question
❖ Cf. a search algorithm answers only “how to get from A to B” questions

Logic Language
❖ Propositional logic
  ❖ Syntax: P ∨ (¬Q ∧ R); X ⇔ (R ⇒ S)
  ❖ Possible model: {P=true, Q=true, R=false, S=true, X=true} or 11011
  ❖ Possible world: interpretations of symbols
  ❖ Semantics: α ∧ β is true in a world iff α is true and β is true (etc.)
❖ First-order logic
  ❖ Syntax: ∀x ∃y P(x,y) ∧ ¬Q(Joe, f(x)) ⇒ f(x) = f(y)
  ❖ Possible model: objects o1, o2, o3; P holds for …; Q holds for …; f(o1) = o1; Joe = o3; etc.
  ❖ Possible world: interpretations of objects, predicates, and functions.
  ❖ Semantics: f(s) is true in a world if s = oj and f holds for oj; etc.

Spectrum of Representations
Search, game-playing MDP, RL
CSPs, planning, propositional logic, Bayes nets, neural nets, RL with function approx.
First-order logic, databases, probabilistic programs

Propositional Logic

Propositional Logic
❖ Symbols
  ❖ Variables that can be true or false
  ❖ Written in capital letters, e.g. A, B, P1,2
  ❖ Often include True and False
❖ Operators:
  ❖ A ∧ B: A and B (conjunction)
  ❖ A ∨ B: A or B (disjunction). Note: this is not an “exclusive or”
  ❖ A ⇒ B: A implies B (implication). If A then B
  ❖ A ⇔ B: A if and only if B (biconditional)
❖ Sentences

Syntax of Propositional Logic
❖ Given: a set of proposition symbols {X1, X2, …, Xn}

Sentence        → AtomicSentence | ComplexSentence
AtomicSentence  → True | False | Symbol
Symbol          → X1 | X2 | … | Xn
ComplexSentence → ¬Sentence
                | (Sentence ∧ Sentence)
                | (Sentence ∨ Sentence)
                | (Sentence ⇒ Sentence)
                | (Sentence ⇔ Sentence)

Example: Wumpus World
Logical Reasoning
❖ Bij = breeze felt
❖ Sij = stench smelt
❖ Pij = pit here
❖ Wij = Wumpus here
❖ Gij = gold
http://thiagodnf.github.io/wumpus-world-simulator/

Wumpus World: Tell KB
❖ There is no pit in [1, 1]:
  ❖ R1: ¬P1,1
❖ A square is breezy iff there is a pit in a neighboring square:
  ❖ R2: B1,1 ⇔ (P1,2 ∨ P2,1)
  ❖ R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
  ❖ …
❖ The first two percepts:
  ❖ R4: ¬B1,1
  ❖ R5: B2,1

Truth from Semantics
❖ A model specifies the truth value of every proposition symbol (e.g., P, ¬P, True, False)
❖ The truth value of complex sentences is defined in terms of the truth values of their elements:
  ❖ ¬P, P ∧ Q, P ∨ Q, P ⇒ Q, P ⇔ Q

Truth Tables
α ∨ β is inclusive or, not exclusive:

α  β  α ∨ β
F  F    F
F  T    T
T  F    T
T  T    T

Truth Tables
α ⇒ β is equivalent to ¬α ∨ β:

α  β  α ⇒ β
F  F    T
F  T    T
T  F    F
T  T    T

Truth Tables
α ⇔ β is equivalent to (α ⇒ β) ∧ (β ⇒ α):

α  β  α⇒β  β⇒α  (α⇒β) ∧ (β⇒α)
F  F   T    T         T
F  T   T    F         F
T  F   F    T         F
T  T   T    T         T

Semantics of Propositional Logic
Using a given model, check whether a sentence is true.
In other words, does the model satisfy the sentence?
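As a toy illustration, a model can be represented as a dict of truth values and a sentence as a function over it; the sentence below is the earlier example P ∨ (¬Q ∧ R), and the dict-plus-function encoding is an assumption of this sketch.

# Does the model satisfy the sentence?
model = {"P": True, "Q": True, "R": False, "S": True, "X": True}

def sentence(m):
    # P v (~Q ^ R)
    return m["P"] or ((not m["Q"]) and m["R"])

print(sentence(model))  # True: this model satisfies the sentence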

Logical Consequences
❖ Problem: when can we conclude that β logically follows from α?
❖ Entailment: determines a relation between sentences based on semantics (from outside)
❖ Inference: generates new sentences from the current KB (from inside)
❖ Two closely related, but very different, concepts
❖ How are they related?
❖ How can we perform entailment and inference?

Entailment
Entailment: α ⊨ β (“α entails β” or “β follows from α”) iff in every world where α is true, β is also true
❖ I.e., the α-worlds are a subset of the β-worlds: models(α) ⊆ models(β)
Usually we want to know if KB ⊨ query
❖ models(KB) ⊆ models(query)
❖ In other words:
  ❖ KB removes all impossible models (any model where KB is false)
  ❖ If the query is true in all of the remaining models, we conclude that the query must be true
Entailment and implication are very much related
❖ How are α ⊨ β and α ⇒ β related?
❖ Entailment relates two sentences, while an implication is itself a sentence
Validity and Satisfiability
❖ A sentence is valid if it is true in every model ❖ 𝛼entails𝛽ifandonlyif𝛼⇒𝛽isvalid
❖ A valid sentence is also called tautology
❖ A sentence is satisfiable if it is true in some model
❖ A sentence is unsatisfiable if it is true in no model
❖ Satifiability Problem
❖ For a given sentence 𝛼, find assignments to its symbols that makes 𝛼 true
❖ Instance of CSP
❖ Efficient solver: DPLL (AIMA, Chapter. 7.6.1)

Wumpus World: Model
❖ Possible worlds/models ❖ P1,2 P2,2 P3,1

Wumpus World: KB
❖ Possible worlds/models
❖ P1,2 P2,2 P3,1
❖ Knowledge base
❖ Nothing in [1,1]
❖ Breeze in [2,1]

Wumpus World: Query 1
❖ Possible worlds/models
❖ P1,2 P2,2 P3,1
❖ Knowledge base
❖ Nothing in [1,1]
❖ Breeze in [2,1]
❖ Query α1:
❖ No pit in [1,2]

Wumpus World: Query 2
❖ Possible worlds/models
❖ P1,2 P2,2 P3,1
❖ Knowledge base
❖ Nothing in [1,1]
❖ Breeze in [2,1]
❖ Query α2:
❖ No pit in [2,2]

Quiz: Wumpus World
❖ Possible worlds/models
❖ P1,2 P2,2 P3,1
❖ Knowledge base
❖ Nothing in [1,1]
❖ Breeze in [2,1]
❖ Query α3:
❖ No pit in [3,1]

Sentences as Constraints
Adding a sentence to our knowledge base constrains the number of possible models:
KB: Nothing
Possible Models:
P     Q     R
false false false
false false true
false true  false
false true  true
true  false false
true  false true
true  true  false
true  true  true

Sentences as Constraints
Adding a sentence to our knowledge base constrains the number of possible models:
KB: Nothing
KB: [(P ∧ ¬Q) ∨ (Q ∧ ¬P)] ⇒ R
Possible Models (false true false and true false false no longer satisfy the KB):
P     Q     R
false false false
false false true
false true  true
true  false true
true  true  false
true  true  true

Sentences as Constraints
Adding a sentence to our knowledge base constrains the number of possible models:
KB: Nothing
KB: [(P ∧ ¬Q) ∨ (Q ∧ ¬P)] ⇒ R
KB: R, [(P ∧ ¬Q) ∨ (Q ∧ ¬P)] ⇒ R
Possible Models (only the models where R is true remain):
P     Q     R
false false true
false true  true
true  false true
true  true  true

Inference Algorithms
❖ Simple model checking
❖ Efficient model checking via satisfiability
❖ Theorem proving

Simple Model Checking
function TT-ENTAILS?(KB, α) returns true or false
  return TT-CHECK-ALL(KB, α, symbols(KB) ∪ symbols(α), {})

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if empty?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true
  else
    P ← first(symbols)
    rest ← rest(symbols)
    return and(TT-CHECK-ALL(KB, α, rest, model ∪ {P = true}),
               TT-CHECK-ALL(KB, α, rest, model ∪ {P = false}))
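A direct Python transcription of the pseudocode above; representing each sentence as a function from a model (a dict of truth values) to a boolean is an implementation choice for this sketch, not part of the slides.

# Truth-table entailment: enumerate all models of the symbols.
def tt_entails(kb, alpha, symbols):
    return tt_check_all(kb, alpha, list(symbols), {})

def tt_check_all(kb, alpha, symbols, model):
    if not symbols:
        # If KB holds in this complete model, alpha must hold too.
        return alpha(model) if kb(model) else True
    p, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, p: True}) and
            tt_check_all(kb, alpha, rest, {**model, p: False}))

# Example: KB = (P => Q) ^ P entails Q.
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(tt_entails(kb, alpha, ["P", "Q"]))  # True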

Simple Model Checking, contd.
❖ Same recursion as backtracking
❖ O(2ⁿ) time, linear space
❖ We can do much better!
(binary recursion tree branching on P1 = true/false, then P2 = true/false, …)

Efficient Model Checking via Satisfiability
❖ Assume we have a hyper-efficient SAT solver (DPLL); how can we use it to test entailment?
❖ Suppose 𝛼 ⊨ 𝛽
❖ Then 𝛼 ⇒ 𝛽 is true in all worlds (Deduction theorem)
❖ Hence ¬(𝛼 ⇒ 𝛽) is false in all worlds
❖ Hence 𝛼 ∧ ¬𝛽 is false in all worlds, i.e., unsatisfiable
❖ So, add the negated conclusion to what you know, test for (un)satisfiability; also known as reductio ad absurdum
❖ Efficient SAT solvers operate on conjunctive normal form

Conjunctive Normal Form (CNF)
❖ Every sentence can be expressed as a conjunction of clauses
  ❖ A clause is a disjunction of literals
  ❖ A literal is a symbol or a negated symbol
❖ Conversion to CNF by a sequence of standard transformations:
  ❖ B1,1 ⇔ (P1,2 ∨ P2,1)
  ❖ (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
  ❖ (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
  ❖ (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
  ❖ (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
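If sympy is available (an assumption, not something the slides use), the whole conversion can be checked mechanically; the names B11, P12, P21 stand in for B1,1, P1,2, P2,1.

# CNF conversion via sympy's to_cnf.
from sympy import symbols, Equivalent
from sympy.logic.boolalg import to_cnf

B11, P12, P21 = symbols("B11 P12 P21")
print(to_cnf(Equivalent(B11, P12 | P21)))
# e.g. (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11); clause order may vary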

Inference via Theorem Proving
❖ KB: set of sentences
❖ Inference rule specifies when:
❖ If certain sentences belong to KB, you can add certain other sentences to KB
❖ Proof (KB ⊢ 𝛼) is a sequence of applications of inference rules starting from KB and ending in 𝛼
❖ Inference is a completely mechanical operation guided by syntax, no reference to possible worlds

Example of Inference Rules
❖ Modus ponens: from α ⇒ β and α, infer β
❖ And elimination: from α ∧ β, infer α
❖ Biconditional elimination: from α ⇔ β, infer (α ⇒ β) ∧ (β ⇒ α)

Soundness and Completeness
❖ We want inference to be sound:
  ❖ If we can prove B from A (A ⊢ B), then A ⊨ B
❖ We would like inference to be complete:
  ❖ If A ⊨ B, then we can prove B from A (A ⊢ B)
❖ These are properties of the relationship between proof and truth.

PL is Sound and Complete!
❖ Theorem: sound and complete inference can be achieved in PL with one rule: resolution
❖ Resolution: from ℓ ∨ p and ¬p ∨ m, infer ℓ ∨ m
❖ More generally: from ℓ1 ∨ ⋯ ∨ ℓk ∨ p and ¬p ∨ m1 ∨ ⋯ ∨ mn, infer ℓ1 ∨ ⋯ ∨ ℓk ∨ m1 ∨ ⋯ ∨ mn
❖ Only applies to clauses
❖ Not restrictive, since any PL sentence can be written in CNF
Resolution Algorithm
❖ How to perform inference with resolution?
❖ Show KB ⊨ 𝛼 by showing unsatisfability of (KB ∧ ¬𝛼)
function PL-RESOLUTION(KB, α) returns true or false
inputs: KB, the knowledge base, a sentence in propositional logic
α, the query, a sentence in propositional logic
clauses ← the set of clauses in the CNF representation of KB ∧ ¬α new ← { }
for each pair of clauses Ci, Cj in clauses do
resolvents ← PL-RESOLVE(Ci, Cj)
if resolvents contains the empty clause then return true new ← new ∪ resolvents
if new ⊆ clauses then return false clauses ← clauses ∪ new
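A compact Python sketch of this loop, assuming clauses are represented as frozensets of (symbol, polarity) literals; that representation, and the tiny example KB, are choices made for this illustration.

# Resolution refutation: derive the empty clause from KB ^ ~alpha.
from itertools import combinations

def resolve(ci, cj):
    # Return all resolvents of two clauses (cancel one complementary pair).
    out = []
    for (s, pol) in ci:
        if (s, not pol) in cj:
            out.append(frozenset((ci - {(s, pol)}) | (cj - {(s, not pol)})))
    return out

def pl_resolution(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:          # empty clause derived: unsatisfiable
                    return True
                new.add(r)
        if new <= clauses:         # nothing new: satisfiable
            return False
        clauses |= new

# KB ^ ~alpha: (~P v Q), (P), (~Q) is unsatisfiable, so KB |= Q.
kb_and_not_alpha = [frozenset({("P", False), ("Q", True)}),
                    frozenset({("P", True)}),
                    frozenset({("Q", False)})]
print(pl_resolution(kb_and_not_alpha))  # True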

Concluding Remarks
❖ Logical Agents
❖ Knowledge-based agents that can reason about the world
❖ Propositional Logic
❖ Syntax vs semantics
❖ Algorithms for drawing logical consequences (entailment and inference)
❖ For more information:
❖ AIMA, Chapter 7 for Logical Agents
